Diffstat (limited to 'documentation/dev-manual')
-rw-r--r--  documentation/dev-manual/bmaptool.rst  59
-rw-r--r--  documentation/dev-manual/build-quality.rst  409
-rw-r--r--  documentation/dev-manual/building.rst  942
-rw-r--r--  documentation/dev-manual/custom-distribution.rst  135
-rw-r--r--  documentation/dev-manual/custom-template-configuration-directory.rst  52
-rw-r--r--  documentation/dev-manual/customizing-images.rst  223
-rw-r--r--  documentation/dev-manual/debugging.rst  1271
-rw-r--r--  documentation/dev-manual/dev-manual-common-tasks.xml  16034
-rw-r--r--  documentation/dev-manual/dev-manual-customization.xsl  27
-rw-r--r--  documentation/dev-manual/dev-manual-intro.xml  103
-rw-r--r--  documentation/dev-manual/dev-manual-qemu.xml  690
-rw-r--r--  documentation/dev-manual/dev-manual-start.xml  1287
-rwxr-xr-x  documentation/dev-manual/dev-manual.xml  194
-rw-r--r--  documentation/dev-manual/dev-style.css  988
-rw-r--r--  documentation/dev-manual/development-shell.rst  82
-rw-r--r--  documentation/dev-manual/device-manager.rst  74
-rw-r--r--  documentation/dev-manual/disk-space.rst  61
-rw-r--r--  documentation/dev-manual/efficiently-fetching-sources.rst  68
-rw-r--r--  documentation/dev-manual/error-reporting-tool.rst  84
-rw-r--r--  documentation/dev-manual/external-scm.rst  67
-rw-r--r--  documentation/dev-manual/external-toolchain.rst  40
-rw-r--r--  documentation/dev-manual/figures/cute-files-npm-example.png  bin 26248 -> 73191 bytes
-rw-r--r--  documentation/dev-manual/gobject-introspection.rst  155
-rw-r--r--  documentation/dev-manual/index.rst  52
-rw-r--r--  documentation/dev-manual/init-manager.rst  162
-rw-r--r--  documentation/dev-manual/intro.rst  59
-rw-r--r--  documentation/dev-manual/layers.rst  919
-rw-r--r--  documentation/dev-manual/libraries.rst  267
-rw-r--r--  documentation/dev-manual/licenses.rst  544
-rw-r--r--  documentation/dev-manual/new-machine.rst  118
-rw-r--r--  documentation/dev-manual/new-recipe.rst  1639
-rw-r--r--  documentation/dev-manual/packages.rst  1250
-rw-r--r--  documentation/dev-manual/prebuilt-libraries.rst  209
-rw-r--r--  documentation/dev-manual/python-development-shell.rst  50
-rw-r--r--  documentation/dev-manual/qemu.rst  471
-rw-r--r--  documentation/dev-manual/quilt.rst  89
-rw-r--r--  documentation/dev-manual/read-only-rootfs.rst  89
-rw-r--r--  documentation/dev-manual/runtime-testing.rst  594
-rw-r--r--  documentation/dev-manual/sbom.rst  83
-rw-r--r--  documentation/dev-manual/securing-images.rst  156
-rw-r--r--  documentation/dev-manual/security-subjects.rst  189
-rw-r--r--  documentation/dev-manual/speeding-up-build.rst  109
-rw-r--r--  documentation/dev-manual/start.rst  855
-rw-r--r--  documentation/dev-manual/temporary-source-code.rst  66
-rw-r--r--  documentation/dev-manual/upgrading-recipes.rst  397
-rw-r--r--  documentation/dev-manual/vulnerabilities.rst  293
-rw-r--r--  documentation/dev-manual/wayland.rst  90
-rw-r--r--  documentation/dev-manual/wic.rst  731
-rw-r--r--  documentation/dev-manual/x32-psabi.rst  54
49 files changed, 13257 insertions, 19323 deletions
diff --git a/documentation/dev-manual/bmaptool.rst b/documentation/dev-manual/bmaptool.rst
new file mode 100644
index 0000000000..f6f0e6afaf
--- /dev/null
+++ b/documentation/dev-manual/bmaptool.rst
@@ -0,0 +1,59 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Flashing Images Using ``bmaptool``
+**********************************
+
+A fast and easy way to flash an image to a bootable device is to use
+bmaptool, which is integrated into the OpenEmbedded build system.
+bmaptool is a generic tool that creates a file's block map (bmap) and
+then uses that map to copy the file. Compared to traditional tools
+such as ``dd`` or ``cp``, bmaptool can copy (or flash) large files like raw
+system image files much faster.
+
+.. note::
+
+ - If you are using Ubuntu or Debian distributions, you can install
+ the ``bmap-tools`` package using the following command and then
+ use the tool without specifying ``PATH`` even from the root
+ account::
+
+ $ sudo apt install bmap-tools
+
+ - If you are unable to install the ``bmap-tools`` package, you will
+ need to build bmaptool before using it. Use the following command::
+
+ $ bitbake bmaptool-native
+
+The following example shows how to flash a Wic image. While this example
+uses a Wic image, you can use bmaptool to flash any type of image. Use
+these steps to flash an image using bmaptool:
+
+#. *Update your local.conf File:* You need to have the following set
+ in your ``local.conf`` file before building your image::
+
+ IMAGE_FSTYPES += "wic wic.bmap"
+
+#. *Get Your Image:* Either have your image ready (pre-built with the
+ :term:`IMAGE_FSTYPES`
+ setting previously mentioned) or take the step to build the image::
+
+ $ bitbake image
+
+#. *Flash the Device:* Flash the device with the image by using bmaptool
+ depending on your particular setup. The following commands assume the
+ image resides in the :term:`Build Directory`'s ``deploy/images/`` area:
+
+ - If you have write access to the media, use this command form::
+
+ $ oe-run-native bmaptool-native bmaptool copy build-directory/tmp/deploy/images/machine/image.wic /dev/sdX
+
+ - If you do not have write access to the media, set your permissions
+ first and then use the same command form::
+
+ $ sudo chmod 666 /dev/sdX
+ $ oe-run-native bmaptool-native bmaptool copy build-directory/tmp/deploy/images/machine/image.wic /dev/sdX
+
+For help on the ``bmaptool`` command, use the following command::
+
+ $ bmaptool --help
+
diff --git a/documentation/dev-manual/build-quality.rst b/documentation/dev-manual/build-quality.rst
new file mode 100644
index 0000000000..713ea3a48e
--- /dev/null
+++ b/documentation/dev-manual/build-quality.rst
@@ -0,0 +1,409 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Maintaining Build Output Quality
+********************************
+
+Many factors can influence the quality of a build. For example, if you
+upgrade a recipe to use a new version of an upstream software package or
+you experiment with some new configuration options, subtle changes can
+occur that you might not detect until later. Consider the case where
+your recipe is using a newer version of an upstream package. In this
+case, a new version of a piece of software might introduce an optional
+dependency on another library, which is auto-detected. If that library
+has already been built when the software is building, the software will
+link to the built library and that library will be pulled into your
+image along with the new software even if you did not want the library.
+
+The :ref:`ref-classes-buildhistory` class helps you maintain the quality of
+your build output. You can use the class to highlight unexpected and possibly
+unwanted changes in the build output. When you enable build history, it records
+information about the contents of each package and image and then commits that
+information to a local Git repository where you can examine the information.
+
+The remainder of this section describes the following:
+
+- :ref:`How you can enable and disable build history <dev-manual/build-quality:enabling and disabling build history>`
+
+- :ref:`How to understand what the build history contains <dev-manual/build-quality:understanding what the build history contains>`
+
+- :ref:`How to limit the information used for build history <dev-manual/build-quality:using build history to gather image information only>`
+
+- :ref:`How to examine the build history from both a command-line and web interface <dev-manual/build-quality:examining build history information>`
+
+Enabling and Disabling Build History
+====================================
+
+Build history is disabled by default. To enable it, add the following
+:term:`INHERIT` statement and set the :term:`BUILDHISTORY_COMMIT` variable to
+"1" at the end of your ``conf/local.conf`` file found in the
+:term:`Build Directory`::
+
+ INHERIT += "buildhistory"
+ BUILDHISTORY_COMMIT = "1"
+
+Enabling build history as
+previously described causes the OpenEmbedded build system to collect
+build output information and commit it as a single commit to a local
+:ref:`overview-manual/development-environment:git` repository.
+
+.. note::
+
+ Enabling build history increases your build times slightly,
+ particularly for images, and increases the amount of disk space used
+ during the build.
+
+You can disable build history by removing the previous statements from
+your ``conf/local.conf`` file.
+
+Understanding What the Build History Contains
+=============================================
+
+Build history information is kept in ``${``\ :term:`TOPDIR`\ ``}/buildhistory``
+in the :term:`Build Directory` as defined by the :term:`BUILDHISTORY_DIR`
+variable. Here is an example abbreviated listing:
+
+.. image:: figures/buildhistory.png
+ :align: center
+ :width: 50%
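+
+As a text sketch, the top of that tree typically looks like the following
+(the exact architecture, machine, and image names depend on what you
+build)::
+
+   buildhistory/
+     metadata-revs
+     images/
+       qemux86_64/glibc/core-image-minimal/
+     packages/
+       i586-poky-linux/busybox/
+     sdk/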
+
+At the top level, there is a ``metadata-revs`` file that lists the
+revisions of the repositories for the enabled layers when the build was
+produced. The rest of the data splits into separate ``packages``,
+``images`` and ``sdk`` directories, the contents of which are described
+as follows.
+
+Build History Package Information
+---------------------------------
+
+The history for each package contains a text file that has name-value
+pairs with information about the package. For example,
+``buildhistory/packages/i586-poky-linux/busybox/busybox/latest``
+contains the following:
+
+.. code-block:: none
+
+ PV = 1.22.1
+ PR = r32
+ RPROVIDES =
+ RDEPENDS = glibc (>= 2.20) update-alternatives-opkg
+ RRECOMMENDS = busybox-syslog busybox-udhcpc update-rc.d
+ PKGSIZE = 540168
+ FILES = /usr/bin/* /usr/sbin/* /usr/lib/busybox/* /usr/lib/lib*.so.* \
+ /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d \
+ /usr/lib/udev/rules.d /usr/share/busybox /usr/lib/busybox/* \
+ /usr/share/pixmaps /usr/share/applications /usr/share/idl \
+ /usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers
+ FILELIST = /bin/busybox /bin/busybox.nosuid /bin/busybox.suid /bin/sh \
+ /etc/busybox.links.nosuid /etc/busybox.links.suid
+
+Most of these
+name-value pairs correspond to variables used to produce the package.
+The exceptions are ``FILELIST``, which is the actual list of files in
+the package, and ``PKGSIZE``, which is the total size of files in the
+package in bytes.
+
+There is also a file that corresponds to the recipe from which the package
+came (e.g. ``buildhistory/packages/i586-poky-linux/busybox/latest``):
+
+.. code-block:: none
+
+ PV = 1.22.1
+ PR = r32
+ DEPENDS = initscripts kern-tools-native update-rc.d-native \
+ virtual/i586-poky-linux-compilerlibs virtual/i586-poky-linux-gcc \
+ virtual/libc virtual/update-alternatives
+ PACKAGES = busybox-ptest busybox-httpd busybox-udhcpd busybox-udhcpc \
+ busybox-syslog busybox-mdev busybox-hwclock busybox-dbg \
+ busybox-staticdev busybox-dev busybox-doc busybox-locale busybox
+
+Finally, for those recipes fetched from a version control system (e.g.,
+Git), there is a file that lists source revisions that are specified in
+the recipe and the actual revisions used during the build. Listed
+and actual revisions might differ when
+:term:`SRCREV` is set to
+${:term:`AUTOREV`}. Here is an
+example assuming
+``buildhistory/packages/qemux86-poky-linux/linux-yocto/latest_srcrev``::
+
+ # SRCREV_machine = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
+ SRCREV_machine = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
+ # SRCREV_meta = "a227f20eff056e511d504b2e490f3774ab260d6f"
+ SRCREV_meta = "a227f20eff056e511d504b2e490f3774ab260d6f"
+
+You can use the
+``buildhistory-collect-srcrevs`` command with the ``-a`` option to
+collect the stored :term:`SRCREV` values from build history and report them
+in a format suitable for use in global configuration (e.g.,
+``local.conf`` or a distro include file) to override floating
+:term:`AUTOREV` values to a fixed set of revisions. Here is some example
+output from this command::
+
+ $ buildhistory-collect-srcrevs -a
+ # all-poky-linux
+ SRCREV:pn-ca-certificates = "07de54fdcc5806bde549e1edf60738c6bccf50e8"
+ SRCREV:pn-update-rc.d = "8636cf478d426b568c1be11dbd9346f67e03adac"
+ # core2-64-poky-linux
+ SRCREV:pn-binutils = "87d4632d36323091e731eb07b8aa65f90293da66"
+ SRCREV:pn-btrfs-tools = "8ad326b2f28c044cb6ed9016d7c3285e23b673c8"
+ SRCREV_bzip2-tests:pn-bzip2 = "f9061c030a25de5b6829e1abf373057309c734c0"
+ SRCREV:pn-e2fsprogs = "02540dedd3ddc52c6ae8aaa8a95ce75c3f8be1c0"
+ SRCREV:pn-file = "504206e53a89fd6eed71aeaf878aa3512418eab1"
+ SRCREV_glibc:pn-glibc = "24962427071fa532c3c48c918e9d64d719cc8a6c"
+ SRCREV:pn-gnome-desktop-testing = "e346cd4ed2e2102c9b195b614f3c642d23f5f6e7"
+ SRCREV:pn-init-system-helpers = "dbd9197569c0935029acd5c9b02b84c68fd937ee"
+ SRCREV:pn-kmod = "b6ecfc916a17eab8f93be5b09f4e4f845aabd3d1"
+ SRCREV:pn-libnsl2 = "82245c0c58add79a8e34ab0917358217a70e5100"
+ SRCREV:pn-libseccomp = "57357d2741a3b3d3e8425889a6b79a130e0fa2f3"
+ SRCREV:pn-libxcrypt = "50cf2b6dd4fdf04309445f2eec8de7051d953abf"
+ SRCREV:pn-ncurses = "51d0fd9cc3edb975f04224f29f777f8f448e8ced"
+ SRCREV:pn-procps = "19a508ea121c0c4ac6d0224575a036de745eaaf8"
+ SRCREV:pn-psmisc = "5fab6b7ab385080f1db725d6803136ec1841a15f"
+ SRCREV:pn-ptest-runner = "bcb82804daa8f725b6add259dcef2067e61a75aa"
+ SRCREV:pn-shared-mime-info = "18e558fa1c8b90b86757ade09a4ba4d6a6cf8f70"
+ SRCREV:pn-zstd = "e47e674cd09583ff0503f0f6defd6d23d8b718d3"
+ # qemux86_64-poky-linux
+ SRCREV_machine:pn-linux-yocto = "20301aeb1a64164b72bc72af58802b315e025c9c"
+ SRCREV_meta:pn-linux-yocto = "2d38a472b21ae343707c8bd64ac68a9eaca066a0"
+ # x86_64-linux
+ SRCREV:pn-binutils-cross-x86_64 = "87d4632d36323091e731eb07b8aa65f90293da66"
+ SRCREV_glibc:pn-cross-localedef-native = "24962427071fa532c3c48c918e9d64d719cc8a6c"
+ SRCREV_localedef:pn-cross-localedef-native = "794da69788cbf9bf57b59a852f9f11307663fa87"
+ SRCREV:pn-debianutils-native = "de14223e5bffe15e374a441302c528ffc1cbed57"
+ SRCREV:pn-libmodulemd-native = "ee80309bc766d781a144e6879419b29f444d94eb"
+ SRCREV:pn-virglrenderer-native = "363915595e05fb252e70d6514be2f0c0b5ca312b"
+ SRCREV:pn-zstd-native = "e47e674cd09583ff0503f0f6defd6d23d8b718d3"
+
+.. note::
+
+ Here are some notes on using the ``buildhistory-collect-srcrevs`` command:
+
+ - By default, only values where the :term:`SRCREV` was not hardcoded
+ (usually when :term:`AUTOREV` is used) are reported. Use the ``-a``
+ option to see all :term:`SRCREV` values.
+
+ - The output statements might not have any effect if overrides are
+ applied elsewhere in the build system configuration. Use the
+ ``-f`` option to add the ``forcevariable`` override to each output
+ line if you need to work around this restriction.
+
+ - The script does apply special handling when building for multiple
+ machines. However, the script does place a comment before each set
+ of values that specifies the triplet to which they belong, as
+ previously shown (e.g., ``i586-poky-linux``).
+
+Build History Image Information
+-------------------------------
+
+The files produced for each image are as follows:
+
+- ``image-files:`` A directory containing selected files from the root
+ filesystem. The files are defined by
+ :term:`BUILDHISTORY_IMAGE_FILES`.
+
+- ``build-id.txt:`` Human-readable information about the build
+ configuration and metadata source revisions. This file contains the
+ full build header as printed by BitBake.
+
+- ``*.dot:`` Dependency graphs for the image that are compatible with
+ ``graphviz``.
+
+- ``files-in-image.txt:`` A list of files in the image with
+ permissions, owner, group, size, and symlink information.
+
+- ``image-info.txt:`` A text file containing name-value pairs with
+ information about the image. See the following listing example for
+ more information.
+
+- ``installed-package-names.txt:`` A list of installed packages by name
+ only.
+
+- ``installed-package-sizes.txt:`` A list of installed packages ordered
+ by size.
+
+- ``installed-packages.txt:`` A list of installed packages with full
+ package filenames.
+
+.. note::
+
+ Installed package information can be gathered and produced even if
+ package management is disabled for the final image.
+
+Here is an example of ``image-info.txt``:
+
+.. code-block:: none
+
+ DISTRO = poky
+ DISTRO_VERSION = 3.4+snapshot-a0245d7be08f3d24ea1875e9f8872aa6bbff93be
+ USER_CLASSES = buildstats
+ IMAGE_CLASSES = qemuboot qemuboot license_image
+ IMAGE_FEATURES = debug-tweaks
+ IMAGE_LINGUAS =
+ IMAGE_INSTALL = packagegroup-core-boot speex speexdsp
+ BAD_RECOMMENDATIONS =
+ NO_RECOMMENDATIONS =
+ PACKAGE_EXCLUDE =
+ ROOTFS_POSTPROCESS_COMMAND = write_package_manifest; license_create_manifest; cve_check_write_rootfs_manifest; ssh_allow_empty_password; ssh_allow_root_login; postinst_enable_logging; rootfs_update_timestamp; write_image_test_data; empty_var_volatile; sort_passwd; rootfs_reproducible;
+ IMAGE_POSTPROCESS_COMMAND = buildhistory_get_imageinfo ;
+ IMAGESIZE = 9265
+
+Other than ``IMAGESIZE``,
+which is the total size of the files in the image in Kbytes, the
+name-value pairs are variables that may have influenced the content of
+the image. This information is often useful when you are trying to
+determine why a change in the package or file listings has occurred.
+
+Using Build History to Gather Image Information Only
+----------------------------------------------------
+
+As you can see, build history produces image information, including
+dependency graphs, so you can see why something was pulled into the
+image. If you are just interested in this information and not interested
+in collecting specific package or SDK information, you can enable
+writing only image information without any history by adding the
+following to your ``conf/local.conf`` file found in the
+:term:`Build Directory`::
+
+ INHERIT += "buildhistory"
+ BUILDHISTORY_COMMIT = "0"
+ BUILDHISTORY_FEATURES = "image"
+
+Here, you set the
+:term:`BUILDHISTORY_FEATURES`
+variable to use the image feature only.
+
+Build History SDK Information
+-----------------------------
+
+Build history collects similar information on the contents of SDKs (e.g.
+``bitbake -c populate_sdk imagename``) as it does for images.
+Furthermore, this information differs depending on
+whether an extensible or standard SDK is being produced.
+
+The following list shows the files produced for SDKs:
+
+- ``files-in-sdk.txt:`` A list of files in the SDK with permissions,
+ owner, group, size, and symlink information. This list includes both
+ the host and target parts of the SDK.
+
+- ``sdk-info.txt:`` A text file containing name-value pairs with
+ information about the SDK. See the following listing example for more
+ information.
+
+- ``sstate-task-sizes.txt:`` A text file containing name-value pairs
+ with information about task group sizes (e.g. :ref:`ref-tasks-populate_sysroot`
+ tasks have a total size). The ``sstate-task-sizes.txt`` file exists
+ only when an extensible SDK is created.
+
+- ``sstate-package-sizes.txt:`` A text file containing name-value pairs
+ with information for the shared-state packages and sizes in the SDK.
+ The ``sstate-package-sizes.txt`` file exists only when an extensible
+ SDK is created.
+
+- ``sdk-files:`` A folder that contains copies of the files mentioned
+ in ``BUILDHISTORY_SDK_FILES`` if the files are present in the output.
+ Additionally, the default value of ``BUILDHISTORY_SDK_FILES`` is
+ specific to the extensible SDK although you can set it differently if
+ you would like to pull in specific files from the standard SDK.
+
+ The default files are ``conf/local.conf``, ``conf/bblayers.conf``,
+ ``conf/auto.conf``, ``conf/locked-sigs.inc``, and
+ ``conf/devtool.conf``. Thus, for an extensible SDK, these files get
+ copied into the ``sdk-files`` directory.
+
+- The following information appears under each of the ``host`` and
+ ``target`` directories for the portions of the SDK that run on the
+ host and on the target, respectively:
+
+ .. note::
+
+ The following files for the most part are empty when producing an
+ extensible SDK because this type of SDK is not constructed from
+ packages as is the standard SDK.
+
+ - ``depends.dot:`` Dependency graph for the SDK that is compatible
+ with ``graphviz``.
+
+ - ``installed-package-names.txt:`` A list of installed packages by
+ name only.
+
+ - ``installed-package-sizes.txt:`` A list of installed packages
+ ordered by size.
+
+ - ``installed-packages.txt:`` A list of installed packages with full
+ package filenames.
+
+Here is an example of ``sdk-info.txt``:
+
+.. code-block:: none
+
+ DISTRO = poky
+ DISTRO_VERSION = 1.3+snapshot-20130327
+ SDK_NAME = poky-glibc-i686-arm
+ SDK_VERSION = 1.3+snapshot
+ SDKMACHINE =
+ SDKIMAGE_FEATURES = dev-pkgs dbg-pkgs
+ BAD_RECOMMENDATIONS =
+ SDKSIZE = 352712
+
+Other than ``SDKSIZE``, which is
+the total size of the files in the SDK in Kbytes, the name-value pairs
+are variables that might have influenced the content of the SDK. This
+information is often useful when you are trying to determine why a
+change in the package or file listings has occurred.
+
+Examining Build History Information
+-----------------------------------
+
+You can examine build history output from the command line or from a web
+interface.
+
+To see any changes that have occurred (assuming you have
+:term:`BUILDHISTORY_COMMIT` = "1"),
+you can simply use any Git command that allows you to view the history
+of a repository. Here is one method::
+
+ $ git log -p
+
+Realize, however, that this method also shows changes that are not
+significant (e.g. a package's size changing by a few bytes).
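+
+For example, you can narrow the output to a single image or package by
+giving Git a path within the build history repository (the path below is
+illustrative)::
+
+   $ git log -p -- images/qemux86_64/glibc/core-image-minimal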
+
+There is a command-line tool called ``buildhistory-diff``, though,
+that queries the Git repository and prints just the differences that
+might be significant in human-readable form. Here is an example::
+
+ $ poky/poky/scripts/buildhistory-diff . HEAD^
+ Changes to images/qemux86_64/glibc/core-image-minimal (files-in-image.txt):
+ /etc/anotherpkg.conf was added
+ /sbin/anotherpkg was added
+ * (installed-package-names.txt):
+ * anotherpkg was added
+ Changes to images/qemux86_64/glibc/core-image-minimal (installed-package-names.txt):
+ anotherpkg was added
+ packages/qemux86_64-poky-linux/v86d: PACKAGES: added "v86d-extras"
+ * PR changed from "r0" to "r1"
+ * PV changed from "0.1.10" to "0.1.12"
+ packages/qemux86_64-poky-linux/v86d/v86d: PKGSIZE changed from 110579 to 144381 (+30%)
+ * PR changed from "r0" to "r1"
+ * PV changed from "0.1.10" to "0.1.12"
+
+.. note::
+
+ The ``buildhistory-diff`` tool requires the ``GitPython``
+ package. Be sure to install it using ``pip3`` as follows::
+
+ $ pip3 install GitPython --user
+
+
+ Alternatively, you can install ``python3-git`` using the appropriate
+ distribution package manager (e.g. ``apt``, ``dnf``, or ``zypper``).
+
+To see changes to the build history using a web interface, follow the
+instructions in the ``README`` file
+:yocto_git:`here </buildhistory-web/>`.
+
+Here is a sample screenshot of the interface:
+
+.. image:: figures/buildhistory-web.png
+ :width: 100%
+
diff --git a/documentation/dev-manual/building.rst b/documentation/dev-manual/building.rst
new file mode 100644
index 0000000000..fe502690dd
--- /dev/null
+++ b/documentation/dev-manual/building.rst
@@ -0,0 +1,942 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Building
+********
+
+This section describes various build procedures, such as the steps
+needed for a simple build, building a target for multiple configurations,
+generating an image for more than one machine, and so forth.
+
+Building a Simple Image
+=======================
+
+In the development environment, you need to build an image whenever you
+change hardware support, add or change system libraries, or add or
+change services that have dependencies. There are several methods that allow
+you to build an image within the Yocto Project. This section presents
+the basic steps you need to build a simple image using BitBake from a
+build host running Linux.
+
+.. note::
+
+ - For information on how to build an image using
+ :term:`Toaster`, see the
+ :doc:`/toaster-manual/index`.
+
+ - For information on how to use ``devtool`` to build images, see the
+ ":ref:`sdk-manual/extensible:using \`\`devtool\`\` in your sdk workflow`"
+ section in the Yocto Project Application Development and the
+ Extensible Software Development Kit (eSDK) manual.
+
+ - For a quick example on how to build an image using the
+ OpenEmbedded build system, see the
+ :doc:`/brief-yoctoprojectqs/index` document.
+
+ - You can also use the `Yocto Project BitBake
+ <https://marketplace.visualstudio.com/items?itemName=yocto-project.yocto-bitbake>`__
+ extension for Visual Studio Code to build images.
+
+The build process creates an entire Linux distribution from source and
+places it in your :term:`Build Directory` under ``tmp/deploy/images``. For
+detailed information on the build process using BitBake, see the
+":ref:`overview-manual/concepts:images`" section in the Yocto Project Overview
+and Concepts Manual.
+
+The following figure and list provide an overview of the build process:
+
+.. image:: figures/bitbake-build-flow.png
+ :width: 100%
+
+#. *Set up Your Host Development System to Support Development Using the
+ Yocto Project*: See the ":doc:`start`" section for options on how to get a
+ build host ready to use the Yocto Project.
+
+#. *Initialize the Build Environment:* Initialize the build environment
+ by sourcing the build environment script (i.e.
+ :ref:`structure-core-script`)::
+
+ $ source oe-init-build-env [build_dir]
+
+ When you use the initialization script, the OpenEmbedded build system
+ uses ``build`` as the default :term:`Build Directory` in your current work
+ directory. You can use a `build_dir` argument with the script to
+ specify a different :term:`Build Directory`.
+
+ .. note::
+
+ A common practice is to use a different :term:`Build Directory` for
+ different targets; for example, ``~/build/x86`` for a ``qemux86``
+ target, and ``~/build/arm`` for a ``qemuarm`` target. In any
+ event, it's typically cleaner to locate the :term:`Build Directory`
+ somewhere outside of your source directory.
+
+#. *Make Sure Your* ``local.conf`` *File is Correct*: Ensure the
+ ``conf/local.conf`` configuration file, which is found in the
+ :term:`Build Directory`, is set up how you want it. This file defines many
+ aspects of the build environment including the target machine architecture
+ through the :term:`MACHINE` variable, the packaging format used during
+ the build (:term:`PACKAGE_CLASSES`), and a centralized tarball download
+ directory through the :term:`DL_DIR` variable.
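+
+ As a minimal sketch, the relevant lines in ``conf/local.conf`` might look
+ like the following (the values shown are examples, not requirements)::
+
+    MACHINE = "qemux86-64"
+    PACKAGE_CLASSES = "package_rpm"
+    DL_DIR = "${TOPDIR}/downloads"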
+
+#. *Build the Image:* Build the image using the ``bitbake`` command::
+
+ $ bitbake target
+
+ .. note::
+
+ For information on BitBake, see the :doc:`bitbake:index`.
+
+ The target is the name of the recipe you want to build. Common
+ targets are the images in ``meta/recipes-core/images``,
+ ``meta/recipes-sato/images``, and so forth all found in the
+ :term:`Source Directory`. Alternatively, the target
+ can be the name of a recipe for a specific piece of software such as
+ BusyBox. For more details about the images the OpenEmbedded build
+ system supports, see the
+ ":ref:`ref-manual/images:Images`" chapter in the Yocto
+ Project Reference Manual.
+
+ As an example, the following command builds the
+ ``core-image-minimal`` image::
+
+ $ bitbake core-image-minimal
+
+ Once an
+ image has been built, it often needs to be installed. The images and
+ kernels built by the OpenEmbedded build system are placed in the
+ :term:`Build Directory` in ``tmp/deploy/images``. For information on how to
+ run pre-built images such as ``qemux86`` and ``qemuarm``, see the
+ :doc:`/sdk-manual/index` manual. For
+ information about how to install these images, see the documentation
+ for your particular board or machine.
+
+Building Images for Multiple Targets Using Multiple Configurations
+==================================================================
+
+You can use a single ``bitbake`` command to build multiple images or
+packages for different targets where each image or package requires a
+different configuration (multiple configuration builds). The builds, in
+this scenario, are sometimes referred to as "multiconfigs", and this
+section uses that term throughout.
+
+This section describes how to set up for multiple configuration builds
+and how to account for cross-build dependencies between the
+multiconfigs.
+
+Setting Up and Running a Multiple Configuration Build
+-----------------------------------------------------
+
+To accomplish a multiple configuration build, you must define each
+target's configuration separately using a parallel configuration file in
+the :term:`Build Directory` or configuration directory within a layer, and you
+must follow a required file hierarchy. Additionally, you must enable the
+multiple configuration builds in your ``local.conf`` file.
+
+Follow these steps to set up and execute multiple configuration builds:
+
+- *Create Separate Configuration Files*: You need to create a single
+ configuration file for each build target (each multiconfig).
+ The configuration definitions are implementation dependent but often
+ each configuration file will define the machine and the
+ temporary directory BitBake uses for the build. Whether the same
+ temporary directory (:term:`TMPDIR`) can be shared will depend on what is
+ similar and what is different between the configurations. Multiple
+ :term:`MACHINE` targets can share the same :term:`TMPDIR` as long as the
+ rest of the configuration is the same; multiple :term:`DISTRO` settings
+ would need separate :term:`TMPDIR` directories.
+
+ For example, consider a scenario with two different multiconfigs for the same
+ :term:`MACHINE`: "qemux86" built
+ for two distributions such as "poky" and "poky-lsb". In this case,
+ you would need to use separate :term:`TMPDIR` directories.
+
+ Here is an example showing the minimal statements needed in a
+ configuration file for a "qemux86" target whose temporary build
+ directory is ``tmpmultix86``::
+
+ MACHINE = "qemux86"
+ TMPDIR = "${TOPDIR}/tmpmultix86"
+
+ The location for these multiconfig configuration files is specific.
+ They must reside in the current :term:`Build Directory` in a sub-directory of
+ ``conf`` named ``multiconfig`` or within a layer's ``conf`` directory
+ under a directory named ``multiconfig``. Here is an example that defines
+ two configuration files for the "x86" and "arm" multiconfigs:
+
+ .. image:: figures/multiconfig_files.png
+ :align: center
+ :width: 50%
+
+ The usual :term:`BBPATH` search path is used to locate multiconfig files in
+ a similar way to other conf files.
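+
+ In other words, for the example above, a sketch of the expected layout
+ relative to the :term:`Build Directory` would be::
+
+    conf/multiconfig/x86.conf
+    conf/multiconfig/arm.conf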
+
+- *Add the BitBake Multi-configuration Variable to the Local
+ Configuration File*: Use the
+ :term:`BBMULTICONFIG`
+ variable in your ``conf/local.conf`` configuration file to specify
+ each multiconfig. Continuing with the example from the previous
+ figure, the :term:`BBMULTICONFIG` variable needs to enable two
+ multiconfigs: "x86" and "arm" by specifying each configuration file::
+
+ BBMULTICONFIG = "x86 arm"
+
+ .. note::
+
+ A "default" configuration already exists by definition. This
+ configuration is named: "" (i.e. empty string) and is defined by
+ the variables coming from your ``local.conf``
+ file. Consequently, the previous example actually adds two
+ additional configurations to your build: "arm" and "x86" along
+ with "".
+
+- *Launch BitBake*: Use the following BitBake command form to launch
+ the multiple configuration build::
+
+ $ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
+
+ For the example in this section, the following command applies::
+
+ $ bitbake mc:x86:core-image-minimal mc:arm:core-image-sato mc::core-image-base
+
+ The previous BitBake command builds a ``core-image-minimal`` image
+ that is configured through the ``x86.conf`` configuration file, a
+ ``core-image-sato`` image that is configured through the ``arm.conf``
+ configuration file and a ``core-image-base`` that is configured
+ through your ``local.conf`` configuration file.
+
+.. note::
+
+ Support for multiple configuration builds in the Yocto Project &DISTRO;
+ (&DISTRO_NAME;) Release does not include Shared State (sstate)
+ optimizations. Consequently, if a build uses the same object twice
+ in, for example, two different :term:`TMPDIR`
+ directories, the build either loads from an existing sstate cache for
+ that build at the start or builds the object fresh.
+
+Enabling Multiple Configuration Build Dependencies
+--------------------------------------------------
+
+Sometimes dependencies can exist between targets (multiconfigs) in a
+multiple configuration build. For example, suppose that in order to
+build a ``core-image-sato`` image for an "x86" multiconfig, the root
+filesystem of an "arm" multiconfig must exist. This dependency is
+essentially that the
+:ref:`ref-tasks-image` task in the
+``core-image-sato`` recipe depends on the completion of the
+:ref:`ref-tasks-rootfs` task of the
+``core-image-minimal`` recipe.
+
+To enable dependencies in a multiple configuration build, you must
+declare the dependencies in the recipe using the following statement
+form::
+
+ task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
+
+To better show how to use this statement, consider the example scenario
+from the first paragraph of this section. The following statement needs
+to be added to the recipe that builds the ``core-image-sato`` image::
+
+ do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_rootfs"
+
+In this example, the `from_multiconfig` is "x86". The `to_multiconfig` is "arm". The
+task on which the :ref:`ref-tasks-image` task in the recipe depends is the
+:ref:`ref-tasks-rootfs` task from the ``core-image-minimal`` recipe associated
+with the "arm" multiconfig.
+
+Once you set up this dependency, you can build the "x86" multiconfig
+using a BitBake command as follows::
+
+ $ bitbake mc:x86:core-image-sato
+
+This command executes all the tasks needed to create the
+``core-image-sato`` image for the "x86" multiconfig. Because of the
+dependency, BitBake also executes through the :ref:`ref-tasks-rootfs` task for the
+"arm" multiconfig build.
+
+Having a recipe depend on the root filesystem of another build might not
+seem that useful. Consider this change to the statement in the
+``core-image-sato`` recipe::
+
+ do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_image"
+
+In this case, BitBake must
+create the ``core-image-minimal`` image for the "arm" build since the
+"x86" build depends on it.
+
+Because "x86" and "arm" are enabled for multiple configuration builds
+and have separate configuration files, BitBake places the artifacts for
+each build in the respective temporary build directories (i.e.
+:term:`TMPDIR`).
+
+Building an Initial RAM Filesystem (Initramfs) Image
+====================================================
+
+An initial RAM filesystem (:term:`Initramfs`) image provides a temporary root
+filesystem used for early system initialization, typically providing tools and
+loading modules needed to locate and mount the final root filesystem.
+
+Follow these steps to create an :term:`Initramfs` image:
+
+#. *Create the Initramfs Image Recipe:* You can reference the
+ ``core-image-minimal-initramfs.bb`` recipe found in the
+ ``meta/recipes-core`` directory of the :term:`Source Directory`
+ as an example from which to work.
+
+#. *Decide if You Need to Bundle the Initramfs Image Into the Kernel
+ Image:* If you want the :term:`Initramfs` image that is built to be bundled
+ in with the kernel image, set the :term:`INITRAMFS_IMAGE_BUNDLE`
+ variable to ``"1"`` in your ``local.conf`` configuration file and set the
+ :term:`INITRAMFS_IMAGE` variable in the recipe that builds the kernel image.
+
+ Setting the :term:`INITRAMFS_IMAGE_BUNDLE` flag causes the :term:`Initramfs`
+ image to be unpacked into the ``${B}/usr/`` directory. The unpacked
+ :term:`Initramfs` image is then passed to the kernel's ``Makefile`` using the
+ :term:`CONFIG_INITRAMFS_SOURCE` variable, allowing the :term:`Initramfs`
+ image to be built into the kernel normally.
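+
+ As a sketch, assuming the reference :term:`Initramfs` recipe named in the
+ first step, these settings might look like the following (the kernel
+ append file name is illustrative)::
+
+    # conf/local.conf
+    INITRAMFS_IMAGE_BUNDLE = "1"
+
+    # linux-yocto_%.bbappend (or the kernel recipe itself)
+    INITRAMFS_IMAGE = "core-image-minimal-initramfs"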
+
+#. *Optionally Add Items to the Initramfs Image Through the Initramfs
+ Image Recipe:* If you add items to the :term:`Initramfs` image by way of its
+ recipe, you should use :term:`PACKAGE_INSTALL` rather than
+ :term:`IMAGE_INSTALL`. :term:`PACKAGE_INSTALL` gives more direct control of
+ what is added to the image as compared to the defaults you might not
+ necessarily want that are set by the :ref:`ref-classes-image`
+ or :ref:`ref-classes-core-image` classes.
+
+#. *Build the Kernel Image and the Initramfs Image:* Build your kernel
+ image using BitBake. Because the :term:`Initramfs` image recipe is a
+ dependency of the kernel image, the :term:`Initramfs` image is built as well
+ and bundled with the kernel image if you used the
+ :term:`INITRAMFS_IMAGE_BUNDLE` variable described earlier.
+
+Bundling an Initramfs Image From a Separate Multiconfig
+-------------------------------------------------------
+
+There may be a case where we want to build an :term:`Initramfs` image which does not
+inherit the same distro policy as our main image. For example, we may want
+our main image to use ``TCLIBC="glibc"`` but use ``TCLIBC="musl"`` in our :term:`Initramfs`
+image to keep a smaller footprint. However, by performing the steps mentioned
+above, the :term:`Initramfs` image will inherit ``TCLIBC="glibc"`` without allowing us
+to override it.
+
+To achieve this, you need to perform some additional steps:
+
+#. *Create a multiconfig for your Initramfs image:* You can perform the steps
+ on ":ref:`dev-manual/building:building images for multiple targets using multiple configurations`" to create a separate multiconfig.
+ For the sake of simplicity let's assume such multiconfig is called: ``initramfscfg.conf`` and
+ contains the variables::
+
+ TMPDIR="${TOPDIR}/tmp-initramfscfg"
+ TCLIBC="musl"
+
+#. *Set additional Initramfs variables on your main configuration:*
+ Additionally, on your main configuration (``local.conf``) you need to set the
+ variables::
+
+ INITRAMFS_MULTICONFIG = "initramfscfg"
+ INITRAMFS_DEPLOY_DIR_IMAGE = "${TOPDIR}/tmp-initramfscfg/deploy/images/${MACHINE}"
+
+ The variables :term:`INITRAMFS_MULTICONFIG` and :term:`INITRAMFS_DEPLOY_DIR_IMAGE`
+ are used to create a multiconfig dependency from the kernel to the :term:`INITRAMFS_IMAGE`
+ to be built coming from the ``initramfscfg`` multiconfig, and to let the
+ build system know where the :term:`INITRAMFS_IMAGE` will be located.
+
+ Building a system with such configuration will build the kernel using the
+ main configuration but the :ref:`ref-tasks-bundle_initramfs` task will grab the
+ selected :term:`INITRAMFS_IMAGE` from :term:`INITRAMFS_DEPLOY_DIR_IMAGE`
+ instead, resulting in a musl based :term:`Initramfs` image bundled in the kernel
+ but a glibc based main image.
+
+ The same is applicable to avoid inheriting :term:`DISTRO_FEATURES` on :term:`INITRAMFS_IMAGE`
+ or to build a different :term:`DISTRO` for it such as ``poky-tiny``.
+
+
+Building a Tiny System
+======================
+
+Very small distributions have some significant advantages such as
+requiring less on-die or in-package memory (cheaper), better performance
+through efficient cache usage, lower power requirements due to less
+memory, faster boot times, and reduced development overhead. Some
+real-world examples where a very small distribution gives you distinct
+advantages are digital cameras, medical devices, and small headless
+systems.
+
+This section presents information that shows you how you can trim your
+distribution to even smaller sizes than the ``poky-tiny`` distribution,
+which is around 5 Mbytes and can be built out-of-the-box using the
+Yocto Project.
+
+Tiny System Overview
+--------------------
+
+The following list presents the overall steps you need to consider and
+perform to create distributions with smaller root filesystems, achieve
+faster boot times, maintain your critical functionality, and avoid
+initial RAM disks:
+
+- :ref:`Determine your goals and guiding principles
+ <dev-manual/building:goals and guiding principles>`
+
+- :ref:`dev-manual/building:understand what contributes to your image size`
+
+- :ref:`Reduce the size of the root filesystem
+ <dev-manual/building:trim the root filesystem>`
+
+- :ref:`Reduce the size of the kernel <dev-manual/building:trim the kernel>`
+
+- :ref:`dev-manual/building:remove package management requirements`
+
+- :ref:`dev-manual/building:look for other ways to minimize size`
+
+- :ref:`dev-manual/building:iterate on the process`
+
+Goals and Guiding Principles
+----------------------------
+
+Before you can reach your destination, you need to know where you are
+going. Here is an example list that you can use as a guide when creating
+very small distributions:
+
+- Determine how much space you need (e.g. a kernel that is 1 Mbyte or
+ less and a root filesystem that is 3 Mbytes or less).
+
+- Find the areas that are currently taking 90% of the space and
+ concentrate on reducing those areas.
+
+- Do not create any difficult "hacks" to achieve your goals.
+
+- Leverage the device-specific options.
+
+- Work in a separate layer so that you keep changes isolated. For
+ information on how to create layers, see the
+ ":ref:`dev-manual/layers:understanding and creating layers`" section.
+
+Understand What Contributes to Your Image Size
+----------------------------------------------
+
+It is easiest to have something to start with when creating your own
+distribution. You can use the Yocto Project out-of-the-box to create the
+``poky-tiny`` distribution. Ultimately, you will want to make changes in
+your own distribution that are likely modeled after ``poky-tiny``.
+
+.. note::
+
+ To use ``poky-tiny`` in your build, set the :term:`DISTRO` variable in your
+ ``local.conf`` file to "poky-tiny" as described in the
+ ":ref:`dev-manual/custom-distribution:creating your own distribution`"
+ section.
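+
+ For example::
+
+    DISTRO = "poky-tiny"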
+
+Understanding some memory concepts will help you reduce the system size.
+Memory consists of static, dynamic, and temporary memory. Static memory
+is the TEXT (code), DATA (initialized data in the code), and BSS
+(uninitialized data) sections. Dynamic memory represents memory that is
+allocated at runtime: stacks, hash tables, and so forth. Temporary
+memory is recovered after the boot process. This memory consists of
+memory used for decompressing the kernel and for the ``__init``
+functions.
+
+To help you see where you currently are with kernel and root filesystem
+sizes, you can use two tools found in the :term:`Source Directory`
+in the
+``scripts/tiny/`` directory:
+
+- ``ksize.py``: Reports component sizes for the kernel build objects.
+
+- ``dirsize.py``: Reports component sizes for the root filesystem.
+
+This next tool and command help you organize configuration fragments and
+view file dependencies in a human-readable form:
+
+- ``merge_config.sh``: Helps you manage configuration files and
+ fragments within the kernel. With this tool, you can merge individual
+ configuration fragments together. The tool allows you to make
+ overrides and warns you of any missing configuration options. The
+ tool is ideal for allowing you to iterate on configurations, create
+ minimal configurations, and create configuration files for different
+ machines without having to duplicate your process.
+
+ The ``merge_config.sh`` script is part of the Linux Yocto kernel Git
+ repositories (i.e. ``linux-yocto-3.14``, ``linux-yocto-3.10``,
+ ``linux-yocto-3.8``, and so forth) in the ``scripts/kconfig``
+ directory.
+
+ For more information on configuration fragments, see the
+ ":ref:`kernel-dev/common:creating configuration fragments`"
+ section in the Yocto Project Linux Kernel Development Manual.
+
+- ``bitbake -u taskexp -g bitbake_target``: Using the BitBake command
+ with these options brings up a Dependency Explorer from which you can
+ view file dependencies. Understanding these dependencies allows you
+ to make informed decisions when cutting out various pieces of the
+ kernel and root filesystem.
+
+Trim the Root Filesystem
+------------------------
+
+The root filesystem is made up of packages for booting, libraries, and
+applications. To change things, you can configure how the packaging
+happens, which changes the way you build them. You can also modify the
+filesystem itself or select a different filesystem.
+
+First, find out what is hogging your root filesystem by running the
+``dirsize.py`` script from your root directory::
+
+ $ cd root-directory-of-image
+ $ dirsize.py 100000 > dirsize-100k.log
+ $ cat dirsize-100k.log
+
+You can apply a filter to the script to ignore files
+under a certain size. The previous example filters out any files below
+100 Kbytes. The sizes reported by the tool are uncompressed, and thus
+will be smaller by a relatively constant factor in a compressed root
+filesystem. When you examine your log file, you can focus on areas of
+the root filesystem that take up large amounts of memory.
+
+You need to be sure that what you eliminate does not cripple the
+functionality you need. One way to see how packages relate to each other
+is by using the Dependency Explorer UI with the BitBake command::
+
+ $ cd image-directory
+ $ bitbake -u taskexp -g image
+
+Use the interface to
+select potential packages you wish to eliminate and see their dependency
+relationships.
+
+When deciding how to reduce the size, get rid of packages that result in
+minimal impact on the feature set. For example, you might not need a VGA
+display. Or, you might be able to get by with ``devtmpfs`` and ``mdev``
+instead of ``udev``.
+
+Use your ``local.conf`` file to make changes. For example, to eliminate
+``udev`` and ``glib``, set the following in the local configuration
+file::
+
+ VIRTUAL-RUNTIME_dev_manager = ""
+
+Finally, you should consider exactly the type of root filesystem you
+need to meet your needs while also reducing its size. For example,
+consider ``cramfs``, ``squashfs``, ``ubifs``, ``ext2``, or an
+:term:`Initramfs` using ``initramfs``. Be aware that ``ext3`` requires a 1
+Mbyte journal. If you are okay with running read-only, you do not need
+this journal.
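+
+For example, to select a compressed, read-only root filesystem type, you
+might set the following in your ``local.conf`` file (a sketch; choose the
+type that actually fits your device)::
+
+   IMAGE_FSTYPES = "squashfs"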
+
+.. note::
+
+ After each round of elimination, you need to rebuild your system and
+ then use the tools to see the effects of your reductions.
+
+Trim the Kernel
+---------------
+
+The kernel is built by including policies for hardware-independent
+aspects. What subsystems do you enable? For what architecture are you
+building? Which drivers do you build by default?
+
+.. note::
+
+ You can modify the kernel source if you want to help with boot time.
+
+Run the ``ksize.py`` script from the top-level Linux build directory to
+get an idea of what is making up the kernel::
+
+ $ cd top-level-linux-build-directory
+ $ ksize.py > ksize.log
+ $ cat ksize.log
+
+When you examine the log, you will see how much space is taken up with
+the built-in ``.o`` files for drivers, networking, core kernel files,
+filesystem, sound, and so forth. The sizes reported by the tool are
+uncompressed, and thus will be smaller by a relatively constant factor
+in a compressed kernel image. Look to reduce the areas that are large and
+take up most of the space, following the "90% rule."
+
+To examine, or drill down, into any particular area, use the ``-d``
+option with the script::
+
+ $ ksize.py -d > ksize.log
+
+Using this option
+breaks out the individual file information for each area of the kernel
+(e.g. drivers, networking, and so forth).
+
+Use your log file to see what you can eliminate from the kernel based on
+features you can let go. For example, if you are not going to need
+sound, you do not need any drivers that support sound.
+
+After figuring out what to eliminate, you need to reconfigure the kernel
+to reflect those changes during the next build. You could run
+``menuconfig`` and make all your changes at once. However, that makes it
+difficult to see the effects of your individual eliminations and also
+makes it difficult to replicate the changes for perhaps another target
+device. A better method is to start with no configurations using
+``allnoconfig``, create configuration fragments for individual changes,
+and then manage the fragments into a single configuration file using
+``merge_config.sh``. The tool makes it easy for you to iterate using the
+configuration change and build cycle.
+
+Each time you make configuration changes, you need to rebuild the kernel
+and check to see what impact your changes had on the overall size.
+
+Remove Package Management Requirements
+--------------------------------------
+
+Packaging requirements add size to the image. One way to reduce the size
+of the image is to remove all the packaging requirements from the image.
+This reduction includes both removing the package manager and its unique
+dependencies as well as removing the package management data itself.
+
+To eliminate all the packaging requirements for an image, be sure that
+"package-management" is not part of your
+:term:`IMAGE_FEATURES`
+statement for the image. When you remove this feature, you are removing
+the package manager as well as its dependencies from the root
+filesystem.
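+
+If another configuration file or image recipe adds "package-management", a
+sketch of one way to strip it back out in your ``local.conf`` file is::
+
+   IMAGE_FEATURES:remove = "package-management"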
+
+Look for Other Ways to Minimize Size
+------------------------------------
+
+Depending on your particular circumstances, other areas that you can
+trim likely exist. The key to finding these areas is through tools and
+methods described here combined with experimentation and iteration. Here
+are a couple of areas to experiment with:
+
+- ``glibc``: In general, follow this process:
+
+ #. Remove ``glibc`` features from
+ :term:`DISTRO_FEATURES`
+ that you think you do not need.
+
+ #. Build your distribution.
+
+ #. If the build fails due to missing symbols in a package, determine
+ if you can reconfigure the package to not need those features. For
+ example, change the configuration to not support wide character
+ support as is done for ``ncurses``. Or, if support for those
+ characters is needed, determine what ``glibc`` features provide
+ the support and restore the configuration.
+
+ #. Rebuild and repeat the process.
+
+- ``busybox``: For BusyBox, use a process similar to the one described for
+ ``glibc``. The difference is that you will need to boot the resulting
+ system to see if you are able to do everything you expect from the
+ running system. Be sure to integrate configuration fragments into
+ BusyBox because BusyBox handles its own core features and then allows
+ you to add configuration fragments on top.
+
+Iterate on the Process
+----------------------
+
+If you have not reached your goals on system size, you need to iterate
+on the process. The process is the same. Use the tools and see just what
+is taking up 90% of the root filesystem and the kernel. Decide what you
+can eliminate without limiting your device beyond what you need.
+
+Depending on your system, a good place to look might be BusyBox, which
+provides a stripped-down version of Unix tools in a single executable
+file. You might be able to drop virtual terminal services or perhaps
+IPv6.
+
+Building Images for More than One Machine
+=========================================
+
+A common scenario developers face is creating images for several
+different machines that use the same software environment. In this
+situation, it is tempting to set the tunings and optimization flags for
+each build specifically for the targeted hardware (i.e. "maxing out" the
+tunings). Doing so can considerably add to build times and package feed
+maintenance collectively for the machines. For example, selecting tunes
+that are extremely specific to a CPU core used in a system might enable
+some micro optimizations in GCC for that particular system but would
+otherwise not gain you much of a performance difference across the other
+systems as compared to using a more general tuning across all the builds
+(e.g. setting :term:`DEFAULTTUNE`
+specifically for each machine's build). Rather than "max out" each
+build's tunings, you can take steps that cause the OpenEmbedded build
+system to reuse software across the various machines where it makes
+sense.
+
+If build speed and package feed maintenance are considerations, you
+should consider the points in this section that can help you optimize
+your tunings to best consider build times and package feed maintenance.
+
+- *Share the :term:`Build Directory`:* If at all possible, share the
+ :term:`TMPDIR` across builds. The Yocto Project supports switching between
+ different :term:`MACHINE` values in the same :term:`TMPDIR`. This practice
+ is well supported and regularly used by developers when building for
+ multiple machines. When you use the same :term:`TMPDIR` for multiple
+ machine builds, the OpenEmbedded build system can reuse the existing native
+ and often cross-recipes for multiple machines. Thus, build time decreases.
+
+ .. note::
+
+ If :term:`DISTRO` settings change or fundamental configuration settings
+ such as the filesystem layout, you need to work with a clean :term:`TMPDIR`.
+ Sharing :term:`TMPDIR` under these circumstances might work but since it is
+ not guaranteed, you should use a clean :term:`TMPDIR`.
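+
+ As an illustrative sketch (assuming the default ``MACHINE ??=``
+ assignment in ``local.conf``, so that a value passed through the
+ environment takes precedence), you can switch machines between
+ invocations from the same :term:`Build Directory`::
+
+    $ MACHINE=qemux86-64 bitbake core-image-minimal
+    $ MACHINE=qemuarm bitbake core-image-minimal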
+
+- *Enable the Appropriate Package Architecture:* By default, the
+ OpenEmbedded build system enables three levels of package
+ architectures: "all", "tune" or "package", and "machine". Any given
+ recipe usually selects one of these package architectures (types) for
+ its output. Depending on what a given recipe creates packages for,
+ making sure you enable the appropriate package architecture can
+ directly impact the build time.
+
+ A recipe that just generates scripts can enable "all" architecture
+ because there are no binaries to build. To specifically enable "all"
+ architecture, be sure your recipe inherits the
+ :ref:`ref-classes-allarch` class.
+ This class is useful for "all" architectures because it configures
+ many variables so packages can be used across multiple architectures.
+
+ If your recipe needs to generate packages that are machine-specific
+ or when one of the build or runtime dependencies is already
+ machine-architecture dependent, which makes your recipe also
+ machine-architecture dependent, make sure your recipe enables the
+ "machine" package architecture through the
+ :term:`MACHINE_ARCH`
+ variable::
+
+ PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+ When you do not
+ specifically enable a package architecture through the
+ :term:`PACKAGE_ARCH`, the
+ OpenEmbedded build system defaults to the
+ :term:`TUNE_PKGARCH` setting::
+
+ PACKAGE_ARCH = "${TUNE_PKGARCH}"
+
+- *Choose a Generic Tuning File if Possible:* Some tunes are more
+ generic and can run on multiple targets (e.g. an ``armv5`` set of
+ packages could run on ``armv6`` and ``armv7`` processors in most
+ cases). Similarly, ``i486`` binaries could work on ``i586`` and
+ higher processors. You should realize, however, that advances on
+ newer processor versions would not be used.
+
+ If you select the same tune for several different machines, the
+ OpenEmbedded build system reuses software previously built, thus
+ speeding up the overall build time. Realize that even though a new
+ sysroot for each machine is generated, the software is not recompiled
+ and only one package feed exists.
+
+- *Manage Granular Level Packaging:* Sometimes there are cases where
+ injecting another level of package architecture beyond the three
+ higher levels noted earlier can be useful. For example, consider how
+ NXP (formerly Freescale) allows for the easy reuse of binary packages
+ in their layer
+ :yocto_git:`meta-freescale </meta-freescale/>`.
+ In this example, the
+ :yocto_git:`fsl-dynamic-packagearch </meta-freescale/tree/classes/fsl-dynamic-packagearch.bbclass>`
+ class shares GPU packages for i.MX53 boards because all boards share
+ the AMD GPU. The i.MX6-based boards can do the same because all
+ boards share the Vivante GPU. This class inspects the BitBake
+ datastore to identify if the package provides or depends on one of
+ the sub-architecture values. If so, the class sets the
+ :term:`PACKAGE_ARCH` value
+ based on the ``MACHINE_SUBARCH`` value. If the package does not
+ provide or depend on one of the sub-architecture values but it
+ matches a value in the machine-specific filter, it sets
+ :term:`MACHINE_ARCH`. This
+ behavior reduces the number of packages built and saves build time by
+ reusing binaries.
+
+- *Use Tools to Debug Issues:* Sometimes you can run into situations
+ where software is being rebuilt when you think it should not be. For
+ example, the OpenEmbedded build system might not be using shared
+ state between machines when you think it should be. These types of
+ situations are usually due to references to machine-specific
+ variables such as :term:`MACHINE`,
+ :term:`SERIAL_CONSOLES`,
+ :term:`XSERVER`,
+ :term:`MACHINE_FEATURES`,
+ and so forth in code that is supposed to only be tune-specific or
+ when the recipe depends
+ (:term:`DEPENDS`,
+ :term:`RDEPENDS`,
+ :term:`RRECOMMENDS`,
+ :term:`RSUGGESTS`, and so forth)
+ on some other recipe that already has
+ :term:`PACKAGE_ARCH` defined
+ as "${MACHINE_ARCH}".
+
+ .. note::
+
+ Patches to fix any issues identified are most welcome as these
+ issues occasionally do occur.
+
+ For such cases, you can use some tools to help you sort out the
+ situation:
+
+ - ``sstate-diff-machines.sh``*:* You can find this tool in the
+ ``scripts`` directory of the Source Repositories. See the comments
+ in the script for information on how to use the tool.
+
+ - *BitBake's "-S printdiff" Option:* Using this option causes
+ BitBake to try to establish the most recent signature match
+ (e.g. in the shared state cache) and then compare the matched
+ signatures to determine the stamps and delta where the two stamp
+ trees diverge. An example invocation is shown after this list.
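+
+As a sketch of that second tool, the following invocation (using
+``core-image-minimal`` purely as an example target) asks BitBake to dump
+signature data and report where the stamp trees diverge instead of
+building anything::
+
+ $ bitbake core-image-minimal -S printdiff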
+
+Building Software from an External Source
+=========================================
+
+By default, the OpenEmbedded build system uses the :term:`Build Directory`
+when building source code. The build process involves fetching the source
+files, unpacking them, and then patching them if necessary before the build
+takes place.
+
+There are situations where you might want to build software from source
+files that are external to and thus outside of the OpenEmbedded build
+system. For example, suppose you have a project that includes a new BSP
+with a heavily customized kernel. And, you want to minimize exposing the
+build system to the development team so that they can focus on their
+project and maintain everyone's workflow as much as possible. In this
+case, you want a kernel source directory on the development machine
+where the development occurs. You want the recipe's
+:term:`SRC_URI` variable to point to
+the external directory and use it as is, not copy it.
+
+To build from software that comes from an external source, all you need to do
+is inherit the :ref:`ref-classes-externalsrc` class and then set
+the :term:`EXTERNALSRC` variable to point to your external source code. Here
+are the statements to put in your ``local.conf`` file::
+
+ INHERIT += "externalsrc"
+ EXTERNALSRC:pn-myrecipe = "path-to-your-source-tree"
+
+This next example shows how to accomplish the same thing by setting
+:term:`EXTERNALSRC` in the recipe itself or in the recipe's append file::
+
+ EXTERNALSRC = "path"
+ EXTERNALSRC_BUILD = "path"
+
+.. note::
+
+ In order for these settings to take effect, you must globally or
+ locally inherit the :ref:`ref-classes-externalsrc` class.
+
+By default, :ref:`ref-classes-externalsrc` builds the source code in a
+directory separate from the external source directory as specified by
+:term:`EXTERNALSRC`. If you need
+to have the source built in the same directory in which it resides, or
+some other nominated directory, you can set
+:term:`EXTERNALSRC_BUILD`
+to point to that directory::
+
+ EXTERNALSRC_BUILD:pn-myrecipe = "path-to-your-source-tree"
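+
+Putting the pieces together, here is a minimal ``local.conf`` sketch for a
+hypothetical recipe named ``myrecipe`` whose sources live in
+``/home/user/src/myproject`` and which should be built inside that same
+tree (both the recipe name and the path are placeholders)::
+
+ INHERIT += "externalsrc"
+ EXTERNALSRC:pn-myrecipe = "/home/user/src/myproject"
+ EXTERNALSRC_BUILD:pn-myrecipe = "/home/user/src/myproject"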
+
+Replicating a Build Offline
+===========================
+
+It can be useful to take a "snapshot" of upstream sources used in a
+build and then use that "snapshot" later to replicate the build offline.
+To do so, you need to first prepare and populate your downloads
+directory with your "snapshot" of files. Once your downloads directory is
+ready, you can use it at any time and from any machine to replicate your
+build.
+
+Follow these steps to populate your downloads directory:
+
+#. *Create a Clean Downloads Directory:* Start with an empty downloads
+ directory (:term:`DL_DIR`). You
+ start with an empty downloads directory by either removing the files
+ in the existing directory or by setting :term:`DL_DIR` to point to either
+ an empty location or one that does not yet exist.
+
+#. *Generate Tarballs of the Source Git Repositories:* Edit your
+ ``local.conf`` configuration file as follows::
+
+ DL_DIR = "/home/your-download-dir/"
+ BB_GENERATE_MIRROR_TARBALLS = "1"
+
+ During
+ the fetch process in the next step, BitBake gathers the source files
+ and creates tarballs in the directory pointed to by :term:`DL_DIR`. See
+ the
+ :term:`BB_GENERATE_MIRROR_TARBALLS`
+ variable for more information.
+
+#. *Populate Your Downloads Directory Without Building:* Use BitBake to
+ fetch your sources but inhibit the build::
+
+ $ bitbake target --runonly=fetch
+
+ The downloads directory (i.e. ``${DL_DIR}``) now has
+ a "snapshot" of the source files in the form of tarballs, which can
+ be used for the build.
+
+#. *Optionally Remove Any Git or other SCM Subdirectories From the
+ Downloads Directory:* If you want, you can clean up your downloads
+ directory by removing any Git or other Source Control Management
+ (SCM) subdirectories such as ``${DL_DIR}/git2/*``. The tarballs
+ already contain these subdirectories.
+
+Once your downloads directory has everything it needs regarding source
+files, you can create your "own-mirror" and build your target.
+Understand that you can use the files to build the target offline from
+any machine and at any time.
+
+Follow these steps to build your target using the files in the downloads
+directory:
+
+#. *Using Local Files Only:* Inside your ``local.conf`` file, add the
+ :term:`SOURCE_MIRROR_URL` variable, inherit the
+ :ref:`ref-classes-own-mirrors` class, and set the
+ :term:`BB_NO_NETWORK` variable::
+
+ SOURCE_MIRROR_URL ?= "file:///home/your-download-dir/"
+ INHERIT += "own-mirrors"
+ BB_NO_NETWORK = "1"
+
+ The :term:`SOURCE_MIRROR_URL` and :ref:`ref-classes-own-mirrors`
+ class set up the system to use the downloads directory as your "own
+ mirror". Using the :term:`BB_NO_NETWORK` variable makes sure that
+ BitBake's fetching process in step 3 stays local, which means files
+ from your "own-mirror" are used.
+
+#. *Start With a Clean Build:* You can start with a clean build by
+ removing the ``${``\ :term:`TMPDIR`\ ``}`` directory or using a new
+ :term:`Build Directory`.
+
+#. *Build Your Target:* Use BitBake to build your target::
+
+ $ bitbake target
+
+ The build completes using the known local "snapshot" of source
+ files from your mirror. The resulting tarballs for your "snapshot" of
+ source files are in the downloads directory.
+
+ .. note::
+
+ The offline build does not work if recipes attempt to find the
+ latest version of software by setting
+ :term:`SRCREV` to
+ ``${``\ :term:`AUTOREV`\ ``}``::
+
+ SRCREV = "${AUTOREV}"
+
+ When a recipe sets :term:`SRCREV` to
+ ``${``\ :term:`AUTOREV`\ ``}``, the build system accesses the network in an
+ attempt to determine the latest version of software from the SCM.
+ Typically, recipes that use :term:`AUTOREV` are custom or modified
+ recipes. Recipes that reside in public repositories usually do not
+ use :term:`AUTOREV`.
+
+ If you do have recipes that use :term:`AUTOREV`, you can take steps to
+ still use the recipes in an offline build. Do the following:
+
+ #. Use a configuration generated by enabling :ref:`build
+ history <dev-manual/build-quality:maintaining build output quality>`.
+
+ #. Use the ``buildhistory-collect-srcrevs`` command to collect the
+ stored :term:`SRCREV` values from the build's history. For more
+ information on collecting these values, see the
+ ":ref:`dev-manual/build-quality:build history package information`"
+ section.
+
+ #. Once you have the correct source revisions, you can modify
+ those recipes to set :term:`SRCREV` to specific versions of the
+ software.
+
diff --git a/documentation/dev-manual/custom-distribution.rst b/documentation/dev-manual/custom-distribution.rst
new file mode 100644
index 0000000000..0bc386d606
--- /dev/null
+++ b/documentation/dev-manual/custom-distribution.rst
@@ -0,0 +1,135 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Creating Your Own Distribution
+******************************
+
+When you build an image using the Yocto Project and do not alter any
+distribution :term:`Metadata`, you are using the Poky distribution.
+Poky is explicitly a *reference* distribution for testing and
+development purposes. It enables most hardware and software features
+so that they can be tested, but this also means that from a security
+point of view the attack surface is very large. Additionally, at some
+point it is likely that you will want to gain more control over package
+alternative selections, compile-time options, and other low-level
+configurations. For both of these reasons, if you are using the Yocto
+Project for production use then you are strongly encouraged to create
+your own distribution.
+
+To create your own distribution, the basic steps consist of creating
+your own distribution layer, creating your own distribution
+configuration file, and then adding any needed code and Metadata to the
+layer. The following steps provide some more detail:
+
+- *Create a layer for your new distro:* Create your distribution layer
+ so that you can keep your Metadata and code for the distribution
+ separate. It is strongly recommended that you create and use your own
+ layer for configuration and code. Using your own layer as compared to
+ just placing configurations in a ``local.conf`` configuration file
+ makes it easier to reproduce the same build configuration when using
+ multiple build machines. See the
+ ":ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`"
+ section for information on how to quickly set up a layer.
+
+- *Create the distribution configuration file:* The distribution
+ configuration file needs to be created in the ``conf/distro``
+ directory of your layer. You need to name it using your distribution
+ name (e.g. ``mydistro.conf``).
+
+ .. note::
+
+ The :term:`DISTRO` variable in your ``local.conf`` file determines the
+ name of your distribution.
+
+ You can split out parts of your configuration file into include files
+ and then "require" them from within your distribution configuration
+ file. Be sure to place the include files in the
+ ``conf/distro/include`` directory of your layer. A common example
+ usage of include files would be to separate out the selection of
+ desired version and revisions for individual recipes.
+
+ Your configuration file needs to set the following required
+ variables:
+
+ - :term:`DISTRO_NAME`
+
+ - :term:`DISTRO_VERSION`
+
+ The following variables are optional and are typically set from the
+ distribution configuration file (a minimal example configuration is
+ shown after this list):
+
+ - :term:`DISTRO_FEATURES`
+
+ - :term:`DISTRO_EXTRA_RDEPENDS`
+
+ - :term:`DISTRO_EXTRA_RRECOMMENDS`
+
+ - :term:`TCLIBC`
+
+ .. tip::
+
+ If you want to base your distribution configuration file on the
+ very basic configuration from OE-Core, you can use
+ ``conf/distro/defaultsetup.conf`` as a reference and just include
+ variables that differ as compared to ``defaultsetup.conf``.
+ Alternatively, you can create a distribution configuration file
+ from scratch using the ``defaultsetup.conf`` file or configuration files
+ from another distribution such as Poky as a reference.
+
+- *Provide miscellaneous variables:* Be sure to define any other
+ variables for which you want to set a default value or that you want
+ to enforce as part of the distribution configuration. You can include nearly any
+ variable from the ``local.conf`` file. The variables you use are not
+ limited to the list in the previous bulleted item.
+
+- *Point to Your distribution configuration file:* In your ``local.conf``
+ file in the :term:`Build Directory`, set your :term:`DISTRO` variable to
+ point to your distribution's configuration file. For example, if your
+ distribution's configuration file is named ``mydistro.conf``, then
+ you point to it as follows::
+
+ DISTRO = "mydistro"
+
+- *Add more to the layer if necessary:* Use your layer to hold other
+ information needed for the distribution:
+
+ - Add recipes for installing distro-specific configuration files
+ that are not already installed by another recipe. If you have
+ distro-specific configuration files that are included by an
+ existing recipe, you should add an append file (``.bbappend``) for
+ those. For general information and recommendations on how to add
+ recipes to your layer, see the
+ ":ref:`dev-manual/layers:creating your own layer`" and
+ ":ref:`dev-manual/layers:following best practices when creating layers`"
+ sections.
+
+ - Add any image recipes that are specific to your distribution.
+
+ - Add a ``psplash`` append file for a branded splash screen, using
+ the :term:`SPLASH_IMAGES` variable.
+
+ - Add any other append files to make custom changes that are
+ specific to individual recipes.
+
+ For information on append files, see the
+ ":ref:`dev-manual/layers:appending other layers metadata with your layer`"
+ section.
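+
+As a minimal sketch, a hypothetical ``conf/distro/mydistro.conf`` covering
+the required variables and a couple of the optional ones listed above could
+look like the following (all values are placeholders to adapt to your
+distribution)::
+
+ DISTRO_NAME = "My Distro"
+ DISTRO_VERSION = "1.0"
+ DISTRO_FEATURES ?= "${DISTRO_FEATURES_DEFAULT}"
+ TCLIBC = "glibc"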
+
+Copying and modifying the Poky distribution
+===========================================
+
+Instead of creating a custom distribution from scratch as per above, you may
+wish to start your custom distribution configuration by copying the Poky
+distribution provided within the ``meta-poky`` layer and then modifying it.
+This is fine; however, if you do this, keep the following in mind:
+
+- Every reference to Poky needs to be updated in your copy so that it
+ will still apply. This includes override usage within files (e.g. ``:poky``)
+ and in directory names. This is a good opportunity to evaluate each one of
+ these customizations to see if they are needed for your use case.
+
+- Unless you also intend to use them, the ``poky-tiny``, ``poky-altcfg`` and
+ ``poky-bleeding`` variants and any references to them can be removed.
+
+- More generally, the Poky distribution configuration enables a lot more
+ than you likely need for your production use case. You should evaluate *every*
+ configuration choice made in your copy to determine if it is needed.
diff --git a/documentation/dev-manual/custom-template-configuration-directory.rst b/documentation/dev-manual/custom-template-configuration-directory.rst
new file mode 100644
index 0000000000..06fcada822
--- /dev/null
+++ b/documentation/dev-manual/custom-template-configuration-directory.rst
@@ -0,0 +1,52 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Creating a Custom Template Configuration Directory
+**************************************************
+
+If you are producing your own customized version of the build system for
+use by other users, you might want to provide a custom build configuration
+that includes all the necessary settings and layers (i.e. ``local.conf`` and
+``bblayers.conf`` that are created in a new :term:`Build Directory`) and a custom
+message that is shown when setting up the build. This can be done by
+creating one or more template configuration directories in your
+custom distribution layer.
+
+The easiest way to do this is with ``bitbake-layers save-build-conf``::
+
+ $ bitbake-layers save-build-conf ../../meta-alex/ test-1
+ NOTE: Starting bitbake server...
+ NOTE: Configuration template placed into /srv/work/alex/meta-alex/conf/templates/test-1
+ Please review the files in there, and particularly provide a configuration description in /srv/work/alex/meta-alex/conf/templates/test-1/conf-notes.txt
+ You can try out the configuration with
+ TEMPLATECONF=/srv/work/alex/meta-alex/conf/templates/test-1 . /srv/work/alex/poky/oe-init-build-env build-try-test-1
+
+The above command takes the config files from the currently active :term:`Build Directory` under ``conf``,
+replaces site-specific paths in ``bblayers.conf`` with ``##OECORE##``-relative paths, and copies
+the config files into a specified layer under a specified template name.
+
+To use those saved templates as a starting point for a build, users should point
+to one of them with the :term:`TEMPLATECONF` environment variable::
+
+ TEMPLATECONF=/srv/work/alex/meta-alex/conf/templates/test-1 . /srv/work/alex/poky/oe-init-build-env build-try-test-1
+
+The OpenEmbedded build system uses the environment variable
+:term:`TEMPLATECONF` to locate the directory from which it gathers
+configuration information that ultimately ends up in the
+:term:`Build Directory` ``conf`` directory.
+
+If :term:`TEMPLATECONF` is not set, the default value is obtained
+from the ``.templateconf`` file, which is read from the same directory as
+the ``oe-init-build-env`` script. For the Poky reference distribution this
+would be::
+
+ TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
+
+If you look at a configuration template directory, you will
+see the ``bblayers.conf.sample``, ``local.conf.sample``, ``conf-summary.txt`` and
+``conf-notes.txt`` files. The build system uses these files to form the
+respective ``bblayers.conf`` file, ``local.conf`` file, and show
+users usage information about the build they're setting up
+when running the ``oe-init-build-env`` setup script. These can be
+edited further if needed to improve or change the build configurations
+available to the users, and provide useful summaries and detailed usage notes.
+
diff --git a/documentation/dev-manual/customizing-images.rst b/documentation/dev-manual/customizing-images.rst
new file mode 100644
index 0000000000..5b18958ade
--- /dev/null
+++ b/documentation/dev-manual/customizing-images.rst
@@ -0,0 +1,223 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Customizing Images
+******************
+
+You can customize images to satisfy particular requirements. This
+section describes several methods and provides guidelines for each.
+
+Customizing Images Using ``local.conf``
+=======================================
+
+Probably the easiest way to customize an image is to add a package by
+way of the ``local.conf`` configuration file. Because it is limited to
+local use, this method generally only allows you to add packages and is
+not as flexible as creating your own customized image. When you add
+packages using local variables this way, you need to realize that these
+variable changes are in effect for every build and consequently affect
+all images, which might not be what you require.
+
+To add a package to your image using the local configuration file, use
+the :term:`IMAGE_INSTALL` variable with the ``:append`` operator::
+
+ IMAGE_INSTALL:append = " strace"
+
+Use of the syntax is important; specifically, the leading space
+after the opening quote and before the package name, which is
+``strace`` in this example. This space is required since the ``:append``
+operator does not add the space.
+
+Furthermore, you must use ``:append`` instead of the ``+=`` operator if
+you want to avoid ordering issues. Because ``:append`` unconditionally
+appends to the variable, it avoids ordering problems caused by the
+variable being set in image recipes and ``.bbclass`` files with
+operators like ``?=``. Using ``:append`` ensures the operation takes
+effect.
+
+As shown in its simplest use, ``IMAGE_INSTALL:append`` affects all
+images. It is possible to extend the syntax so that the variable applies
+to a specific image only. Here is an example::
+
+ IMAGE_INSTALL:append:pn-core-image-minimal = " strace"
+
+This example adds ``strace`` to the ``core-image-minimal`` image only.
+
+You can add packages using a similar approach through the
+:term:`CORE_IMAGE_EXTRA_INSTALL` variable. If you use this variable, only
+``core-image-*`` images are affected.
+
+Customizing Images Using Custom ``IMAGE_FEATURES`` and ``EXTRA_IMAGE_FEATURES``
+===============================================================================
+
+Another method for customizing your image is to enable or disable
+high-level image features by using the
+:term:`IMAGE_FEATURES` and
+:term:`EXTRA_IMAGE_FEATURES`
+variables. Although the functions for both variables are nearly
+equivalent, best practices dictate using :term:`IMAGE_FEATURES` from within
+a recipe and using :term:`EXTRA_IMAGE_FEATURES` from within your
+``local.conf`` file, which is found in the :term:`Build Directory`.
+
+To understand how these features work, the best reference is
+:ref:`meta/classes-recipe/image.bbclass <ref-classes-image>`.
+This class lists out the available
+:term:`IMAGE_FEATURES`, most of which map to package groups, while
+some, such as ``debug-tweaks`` and ``read-only-rootfs``, resolve as
+general configuration settings.
+
+In summary, the file looks at the contents of the :term:`IMAGE_FEATURES`
+variable and then maps or configures the feature accordingly. Based on
+this information, the build system automatically adds the appropriate
+packages or configurations to the
+:term:`IMAGE_INSTALL` variable.
+Effectively, you are enabling extra features by extending the class or
+creating a custom class for use with specialized image ``.bb`` files.
+
+Use the :term:`EXTRA_IMAGE_FEATURES` variable from within your local
+configuration file. Using a separate area from which to enable features
+with this variable helps you avoid overwriting the features in the image
+recipe that are enabled with :term:`IMAGE_FEATURES`. The value of
+:term:`EXTRA_IMAGE_FEATURES` is added to :term:`IMAGE_FEATURES` within
+``meta/conf/bitbake.conf``.
+
+To illustrate how you can use these variables to modify your image,
+consider an example that selects the SSH server. The Yocto Project ships
+with two SSH servers you can use with your images: Dropbear and OpenSSH.
+Dropbear is a minimal SSH server appropriate for resource-constrained
+environments, while OpenSSH is a well-known standard SSH server
+implementation. By default, the ``core-image-sato`` image is configured
+to use Dropbear. The ``core-image-full-cmdline`` and ``core-image-lsb``
+images both include OpenSSH. The ``core-image-minimal`` image does not
+contain an SSH server.
+
+You can customize your image and change these defaults. Edit the
+:term:`IMAGE_FEATURES` variable in your recipe or use the
+:term:`EXTRA_IMAGE_FEATURES` in your ``local.conf`` file so that it
+configures the image you are working with to include
+``ssh-server-dropbear`` or ``ssh-server-openssh``.
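+
+For example, one way to include the OpenSSH server in every image you build
+is to add the following line to your ``local.conf``::
+
+ EXTRA_IMAGE_FEATURES += "ssh-server-openssh"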
+
+.. note::
+
+ See the ":ref:`ref-manual/features:image features`" section in the Yocto
+ Project Reference Manual for a complete list of image features that ship
+ with the Yocto Project.
+
+Customizing Images Using Custom .bb Files
+=========================================
+
+You can also customize an image by creating a custom recipe that defines
+additional software as part of the image. The following example shows
+the form for the two lines you need::
+
+ IMAGE_INSTALL = "packagegroup-core-x11-base package1 package2"
+ inherit core-image
+
+Defining the software using a custom recipe gives you total control over
+the contents of the image. It is important to use the correct names of
+packages in the :term:`IMAGE_INSTALL` variable. You must use the
+OpenEmbedded notation and not the Debian notation for the names (e.g.
+``glibc-dev`` instead of ``libc6-dev``).
+
+The other method for creating a custom image is to base it on an
+existing image. For example, if you want to create an image based on
+``core-image-sato`` but add the additional package ``strace`` to the
+image, copy ``meta/recipes-sato/images/core-image-sato.bb`` to a new
+``.bb`` file and add the following line to the end of the copy::
+
+ IMAGE_INSTALL += "strace"
+
+Customizing Images Using Custom Package Groups
+==============================================
+
+For complex custom images, the best approach for customizing an image is
+to create a custom package group recipe that is used to build the image
+or images. A good example of a package group recipe is
+``meta/recipes-core/packagegroups/packagegroup-base.bb``.
+
+If you examine that recipe, you see that the :term:`PACKAGES` variable lists
+the package group packages to produce. The ``inherit packagegroup``
+statement sets appropriate default values and automatically adds
+``-dev``, ``-dbg``, and ``-ptest`` complementary packages for each
+package specified in the :term:`PACKAGES` statement.
+
+.. note::
+
+ The ``inherit packagegroup`` line should be located near the top of the
+ recipe, certainly before the :term:`PACKAGES` statement.
+
+For each package you specify in :term:`PACKAGES`, you can use :term:`RDEPENDS`
+and :term:`RRECOMMENDS` entries to provide a list of packages the parent
+task package should contain. You can see examples of these further down
+in the ``packagegroup-base.bb`` recipe.
+
+Here is a short, fabricated example showing the same basic pieces for a
+hypothetical packagegroup defined in ``packagegroup-custom.bb``, where
+the variable :term:`PN` is the standard way to abbreviate the reference to
+the full packagegroup name ``packagegroup-custom``::
+
+ DESCRIPTION = "My Custom Package Groups"
+
+ inherit packagegroup
+
+ PACKAGES = "\
+ ${PN}-apps \
+ ${PN}-tools \
+ "
+
+ RDEPENDS:${PN}-apps = "\
+ dropbear \
+ portmap \
+ psplash"
+
+ RDEPENDS:${PN}-tools = "\
+ oprofile \
+ oprofileui-server \
+ lttng-tools"
+
+ RRECOMMENDS:${PN}-tools = "\
+ kernel-module-oprofile"
+
+In the previous example, two package group packages are created with
+their dependencies and their recommended package dependencies listed:
+``packagegroup-custom-apps`` and ``packagegroup-custom-tools``. To
+build an image using these package group packages, you need to add
+``packagegroup-custom-apps`` and/or ``packagegroup-custom-tools`` to
+:term:`IMAGE_INSTALL`. For other forms of image dependencies see the other
+areas of this section.
+
+Customizing an Image Hostname
+=============================
+
+By default, the configured hostname (i.e. ``/etc/hostname``) in an image
+is the same as the machine name. For example, if
+:term:`MACHINE` equals "qemux86", the
+configured hostname written to ``/etc/hostname`` is "qemux86".
+
+You can customize this name by altering the value of the "hostname"
+variable in the ``base-files`` recipe using either an append file or a
+configuration file. Use the following in an append file::
+
+ hostname = "myhostname"
+
+Use the following in a configuration file::
+
+ hostname:pn-base-files = "myhostname"
+
+Changing the default value of the variable "hostname" can be useful in
+certain situations. For example, suppose you need to do extensive
+testing on an image and you would like to easily identify the image
+under test from existing images with typical default hostnames. In this
+situation, you could change the default hostname to "testme", which
+results in all the images using the name "testme". Once testing is
+complete and you do not need to rebuild the image for test any longer,
+you can easily reset the default hostname.
+
+Another point of interest is that if you unset the variable, the image
+will have no default hostname in the filesystem. Here is an example that
+unsets the variable in a configuration file::
+
+ hostname:pn-base-files = ""
+
+Having no default hostname in the filesystem is suitable for
+environments that use dynamic hostnames such as virtual machines.
+
diff --git a/documentation/dev-manual/debugging.rst b/documentation/dev-manual/debugging.rst
new file mode 100644
index 0000000000..92458a0c37
--- /dev/null
+++ b/documentation/dev-manual/debugging.rst
@@ -0,0 +1,1271 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Debugging Tools and Techniques
+******************************
+
+The exact method for debugging build failures depends on the nature of
+the problem and on the system's area from which the bug originates.
+Standard debugging practices such as comparison against the last known
+working version with examination of the changes and the re-application
+of steps to identify the one causing the problem are valid for the Yocto
+Project just as they are for any other system. Even though it is
+impossible to detail every possible potential failure, this section
+provides some general tips to aid in debugging given a variety of
+situations.
+
+.. note::
+
+ A useful feature for debugging is the error reporting tool.
+ Configuring the Yocto Project to use this tool causes the
+ OpenEmbedded build system to produce error reporting commands as part
+ of the console output. You can enter the commands after the build
+ completes to log error information into a common database, that can
+ help you figure out what might be going wrong. For information on how
+ to enable and use this feature, see the
+ ":ref:`dev-manual/error-reporting-tool:using the error reporting tool`"
+ section.
+
+The following list shows the debugging topics in the remainder of this
+section:
+
+- ":ref:`dev-manual/debugging:viewing logs from failed tasks`" describes
+ how to find and view logs from tasks that failed during the build
+ process.
+
+- ":ref:`dev-manual/debugging:viewing variable values`" describes how to
+ use the BitBake ``-e`` option to examine variable values after a
+ recipe has been parsed.
+
+- ":ref:`dev-manual/debugging:viewing package information with \`\`oe-pkgdata-util\`\``"
+ describes how to use the ``oe-pkgdata-util`` utility to query
+ :term:`PKGDATA_DIR` and
+ display package-related information for built packages.
+
+- ":ref:`dev-manual/debugging:viewing dependencies between recipes and tasks`"
+ describes how to use the BitBake ``-g`` option to display recipe
+ dependency information used during the build.
+
+- ":ref:`dev-manual/debugging:viewing task variable dependencies`" describes
+ how to use the ``bitbake-dumpsig`` command in conjunction with key
+ subdirectories in the :term:`Build Directory` to determine variable
+ dependencies.
+
+- ":ref:`dev-manual/debugging:running specific tasks`" describes
+ how to use several BitBake options (e.g. ``-c``, ``-C``, and ``-f``)
+ to run specific tasks in the build chain. It can be useful to run
+ tasks "out-of-order" when trying isolate build issues.
+
+- ":ref:`dev-manual/debugging:general BitBake problems`" describes how
+ to use BitBake's ``-D`` debug output option to reveal more about what
+ BitBake is doing during the build.
+
+- ":ref:`dev-manual/debugging:building with no dependencies`"
+ describes how to use the BitBake ``-b`` option to build a recipe
+ while ignoring dependencies.
+
+- ":ref:`dev-manual/debugging:recipe logging mechanisms`"
+ describes how to use the many recipe logging functions to produce
+ debugging output and report errors and warnings.
+
+- ":ref:`dev-manual/debugging:debugging parallel make races`"
+ describes how to debug situations where the build consists of several
+ parts that are run simultaneously and when the output or result of
+ one part is not ready for use with a different part of the build that
+ depends on that output.
+
+- ":ref:`dev-manual/debugging:debugging with the gnu project debugger (gdb) remotely`"
+ describes how to use GDB to allow you to examine running programs, which can
+ help you fix problems.
+
+- ":ref:`dev-manual/debugging:debugging with the gnu project debugger (gdb) on the target`"
+ describes how to use GDB directly on target hardware for debugging.
+
+- ":ref:`dev-manual/debugging:other debugging tips`" describes
+ miscellaneous debugging tips that can be useful.
+
+Viewing Logs from Failed Tasks
+==============================
+
+You can find the log for a task in the file
+``${``\ :term:`WORKDIR`\ ``}/temp/log.do_``\ `taskname`.
+For example, the log for the
+:ref:`ref-tasks-compile` task of the
+QEMU minimal image for the x86 machine (``qemux86``) might be in
+``tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_compile``.
+To see the commands :term:`BitBake` ran
+to generate a log, look at the corresponding ``run.do_``\ `taskname` file
+in the same directory.
+
+``log.do_``\ `taskname` and ``run.do_``\ `taskname` are actually symbolic
+links to ``log.do_``\ `taskname`\ ``.``\ `pid` and
+``run.do_``\ `taskname`\ ``.``\ `pid`, where `pid` is the PID the task had
+when it ran. The symlinks always point to the files corresponding to the
+most recent run.
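+
+For example, continuing the ``core-image-minimal`` case above, you could
+inspect the compile log and the commands that produced it with something
+along these lines (the exact path depends on your machine, recipe version
+and :term:`TMPDIR`)::
+
+ $ less tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_compile
+ $ less tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/run.do_compile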
+
+Viewing Variable Values
+=======================
+
+Sometimes you need to know the value of a variable as a result of
+BitBake's parsing step. This could be because some unexpected behavior
+occurred in your project. Perhaps an attempt to :ref:`modify a variable
+<bitbake-user-manual/bitbake-user-manual-metadata:modifying existing
+variables>` did not work out as expected.
+
+BitBake's ``-e`` option is used to display variable values after
+parsing. The following command displays the variable values after the
+configuration files (i.e. ``local.conf``, ``bblayers.conf``,
+``bitbake.conf`` and so forth) have been parsed::
+
+ $ bitbake -e
+
+The following command displays variable values after a specific recipe has
+been parsed. The variables include those from the configuration as well::
+
+ $ bitbake -e recipename
+
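+Because the output is typically very long, it is often convenient to pipe
+it through ``grep``. For example, to check the final value of
+:term:`IMAGE_INSTALL` for an image recipe (the recipe name here is only an
+example)::
+
+ $ bitbake -e core-image-minimal | grep '^IMAGE_INSTALL='
+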
+.. note::
+
+ Each recipe has its own private set of variables (datastore).
+ Internally, after parsing the configuration, a copy of the resulting
+ datastore is made prior to parsing each recipe. This copying implies
+ that variables set in one recipe will not be visible to other
+ recipes.
+
+ Likewise, each task within a recipe gets a private datastore based on
+ the recipe datastore, which means that variables set within one task
+ will not be visible to other tasks.
+
+In the output of ``bitbake -e``, each variable is preceded by a
+description of how the variable got its value, including temporary
+values that were later overridden. This description also includes
+variable flags (varflags) set on the variable. The output can be very
+helpful during debugging.
+
+Variables that are exported to the environment are preceded by
+``export`` in the output of ``bitbake -e``. See the following example::
+
+ export CC="i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/ulf/poky/build/tmp/sysroots/qemux86"
+
+In addition to variable values, the output of the ``bitbake -e`` and
+``bitbake -e`` `recipename` commands includes the following information:
+
+- The output starts with a tree listing all configuration files and
+ classes included globally, recursively listing the files they include
+ or inherit in turn. Much of the behavior of the OpenEmbedded build
+ system (including the behavior of the :ref:`ref-manual/tasks:normal recipe build tasks`) is
+ implemented in the :ref:`ref-classes-base` class and the
+ classes it inherits, rather than being built into BitBake itself.
+
+- After the variable values, all functions appear in the output. For
+ shell functions, variables referenced within the function body are
+ expanded. If a function has been modified using overrides or using
+ override-style operators like ``:append`` and ``:prepend``, then the
+ final assembled function body appears in the output.
+
+Viewing Package Information with ``oe-pkgdata-util``
+====================================================
+
+You can use the ``oe-pkgdata-util`` command-line utility to query
+:term:`PKGDATA_DIR` and display
+various package-related information. When you use the utility, you must
+use it to view information on packages that have already been built.
+
+Here are a few of the available ``oe-pkgdata-util`` subcommands.
+
+.. note::
+
+ You can use the standard \* and ? globbing wildcards as part of
+ package names and paths.
+
+- ``oe-pkgdata-util list-pkgs [pattern]``: Lists all packages
+ that have been built, optionally limiting the match to packages that
+ match pattern.
+
+- ``oe-pkgdata-util list-pkg-files package ...``: Lists the
+ files and directories contained in the given packages.
+
+ .. note::
+
+ A different way to view the contents of a package is to look at
+ the
+ ``${``\ :term:`WORKDIR`\ ``}/packages-split``
+ directory of the recipe that generates the package. This directory
+ is created by the
+ :ref:`ref-tasks-package` task
+ and has one subdirectory for each package the recipe generates,
+ which contains the files stored in that package.
+
+ If you want to inspect the ``${WORKDIR}/packages-split``
+ directory, make sure that :ref:`ref-classes-rm-work` is not
+ enabled when you build the recipe.
+
+- ``oe-pkgdata-util find-path path ...``: Lists the names of
+ the packages that contain the given paths. For example, the following
+ tells us that ``/usr/share/man/man1/make.1`` is contained in the
+ ``make-doc`` package::
+
+ $ oe-pkgdata-util find-path /usr/share/man/man1/make.1
+ make-doc: /usr/share/man/man1/make.1
+
+- ``oe-pkgdata-util lookup-recipe package ...``: Lists the name
+ of the recipes that produce the given packages.
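+
+As a quick sketch of that last subcommand, asking which recipe produces the
+``libz1`` package should report the ``zlib`` recipe::
+
+ $ oe-pkgdata-util lookup-recipe libz1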
+
+For more information on the ``oe-pkgdata-util`` command, use the help
+facility::
+
+ $ oe-pkgdata-util --help
+ $ oe-pkgdata-util subcommand --help
+
+Viewing Dependencies Between Recipes and Tasks
+==============================================
+
+Sometimes it can be hard to see why BitBake wants to build other recipes
+before the one you have specified. Dependency information can help you
+understand why a recipe is built.
+
+To generate dependency information for a recipe, run the following
+command::
+
+ $ bitbake -g recipename
+
+This command writes the following files in the current directory:
+
+- ``pn-buildlist``: A list of recipes/targets involved in building
+ `recipename`. "Involved" here means that at least one task from the
+ recipe needs to run when building `recipename` from scratch. Targets
+ that are in
+ :term:`ASSUME_PROVIDED`
+ are not listed.
+
+- ``task-depends.dot``: A graph showing dependencies between tasks.
+
+The graphs are in :wikipedia:`DOT <DOT_%28graph_description_language%29>`
+format and can be converted to images (e.g. using the ``dot`` tool from
+`Graphviz <https://www.graphviz.org/>`__).
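+
+For instance, assuming Graphviz is installed on the build host, the task
+graph could be rendered to an SVG file as follows (expect a very large
+image for non-trivial targets)::
+
+ $ dot -Tsvg task-depends.dot -o task-depends.svg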
+
+.. note::
+
+ - DOT files use a plain text format. The graphs generated using the
+ ``bitbake -g`` command are often so large as to be difficult to
+ read without special pruning (e.g. with BitBake's ``-I`` option)
+ and processing. Despite the form and size of the graphs, the
+ corresponding ``.dot`` files can still be possible to read and
+ provide useful information.
+
+ As an example, the ``task-depends.dot`` file contains lines such
+ as the following::
+
+ "libxslt.do_configure" -> "libxml2.do_populate_sysroot"
+
+ The above example line reveals that the
+ :ref:`ref-tasks-configure`
+ task in ``libxslt`` depends on the
+ :ref:`ref-tasks-populate_sysroot`
+ task in ``libxml2``, which is a normal
+ :term:`DEPENDS` dependency
+ between the two recipes.
+
+ - For an example of how ``.dot`` files can be processed, see the
+ ``scripts/contrib/graph-tool`` Python script, which finds and
+ displays paths between graph nodes.
+
+You can use a different method to view dependency information by using
+either::
+
+ $ bitbake -g -u taskexp recipename
+
+or::
+
+ $ bitbake -g -u taskexp_ncurses recipename
+
+The ``-u taskexp`` option opens a GUI window from which you can view build-time
+and runtime dependencies for the recipes involved in building `recipename`. The
+``-u taskexp_ncurses`` option uses ncurses instead of GTK to render the UI.
+
+Viewing Task Variable Dependencies
+==================================
+
+As mentioned in the
+":ref:`bitbake-user-manual/bitbake-user-manual-execution:checksums (signatures)`"
+section of the BitBake User Manual, BitBake tries to automatically determine
+what variables a task depends on so that it can rerun the task if any values of
+the variables change. This determination is usually reliable. However, if you
+do things like construct variable names at runtime, then you might have to
+manually declare dependencies on those variables using ``vardeps`` as described
+in the ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:variable flags`"
+section of the BitBake User Manual.
+
+If you are unsure whether a variable dependency is being picked up
+automatically for a given task, you can list the variable dependencies
+BitBake has determined by doing the following:
+
+#. Build the recipe containing the task::
+
+ $ bitbake recipename
+
+#. Inside the :term:`STAMPS_DIR`
+ directory, find the signature data (``sigdata``) file that
+ corresponds to the task. The ``sigdata`` files contain a pickled
+ Python database of all the metadata that went into creating the input
+ checksum for the task. As an example, for the
+ :ref:`ref-tasks-fetch` task of the
+ ``db`` recipe, the ``sigdata`` file might be found in the following
+ location::
+
+ ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1
+
+ For tasks that are accelerated through the shared state
+ (:ref:`sstate <overview-manual/concepts:shared state cache>`) cache, an
+ additional ``siginfo`` file is written into
+ :term:`SSTATE_DIR` along with
+ the cached task output. The ``siginfo`` files contain exactly the
+ same information as ``sigdata`` files.
+
+#. Run ``bitbake-dumpsig`` on the ``sigdata`` or ``siginfo`` file. Here
+ is an example::
+
+ $ bitbake-dumpsig ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1
+
+ In the output of the above command, you will find a line like the
+ following, which lists all the (inferred) variable dependencies for
+ the task. This list also includes indirect dependencies from
+ variables depending on other variables, recursively::
+
+ Task dependencies: ['PV', 'SRCREV', 'SRC_URI', 'SRC_URI[sha256sum]', 'base_do_fetch']
+
+ .. note::
+
+ Functions (e.g. ``base_do_fetch``) also count as variable dependencies.
+ These functions in turn depend on the variables they reference.
+
+ The output of ``bitbake-dumpsig`` also includes the value each
+ variable had, a list of dependencies for each variable, and
+ :term:`BB_BASEHASH_IGNORE_VARS`
+ information.
+
+Debugging signature construction and unexpected task executions
+===============================================================
+
+There is a ``bitbake-diffsigs`` command for comparing two
+``siginfo`` or ``sigdata`` files. This command can be helpful when
+trying to figure out what changed between two versions of a task. If you
+call ``bitbake-diffsigs`` with just one file, the command behaves like
+``bitbake-dumpsig``.
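+
+As a sketch, recent versions of ``bitbake-diffsigs`` can also locate the
+latest two signature files for a given task themselves through the ``-t``
+option, which takes a recipe name and a task name (reusing the ``db``
+example from the previous section)::
+
+ $ bitbake-diffsigs -t db do_fetch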
+
+You can also use BitBake to dump out the signature construction
+information without executing tasks by using either of the following
+BitBake command-line options::
+
+ --dump-signatures=SIGNATURE_HANDLER
+ -S SIGNATURE_HANDLER
+
+
+.. note::
+
+ Two common values for `SIGNATURE_HANDLER` are "none" and "printdiff", which
+ dump only the signature or compare the dumped signature with the most recent one,
+ respectively. "printdiff" will try to establish the most recent
+ signature match (e.g. in the sstate cache) and then
+ compare the matched signatures to determine the stamps and delta
+ where these two stamp trees diverge. This can be used to determine why
+ tasks need to be re-run in situations where that is not expected.
+
+Using BitBake with either of these options causes BitBake to dump out
+``sigdata`` files in the ``stamps`` directory for every task it would
+have executed instead of building the specified target package.
+
+Viewing Metadata Used to Create the Input Signature of a Shared State Task
+==========================================================================
+
+Seeing what metadata went into creating the input signature of a shared
+state (sstate) task can be a useful debugging aid. This information is
+available in signature information (``siginfo``) files in
+:term:`SSTATE_DIR`. For
+information on how to view and interpret information in ``siginfo``
+files, see the
+":ref:`dev-manual/debugging:viewing task variable dependencies`" section.
+
+For conceptual information on shared state, see the
+":ref:`overview-manual/concepts:shared state`"
+section in the Yocto Project Overview and Concepts Manual.
+
+Invalidating Shared State to Force a Task to Run
+================================================
+
+The OpenEmbedded build system uses
+:ref:`checksums <overview-manual/concepts:checksums (signatures)>` and
+:ref:`overview-manual/concepts:shared state` cache to avoid unnecessarily
+rebuilding tasks. Collectively, this scheme is known as "shared state
+code".
+
+As with all schemes, this one has some drawbacks. It is possible that
+you could make implicit changes to your code that the checksum
+calculations do not take into account. These implicit changes affect a
+task's output but do not trigger the shared state code into rebuilding a
+recipe. Consider an example during which a tool changes its output.
+Assume that the output of ``rpmdeps`` changes. The result of the change
+should be that all the ``package`` and ``package_write_rpm`` shared
+state cache items become invalid. However, because the change to the
+output is external to the code and therefore implicit, the associated
+shared state cache items do not become invalidated. In this case, the
+build process uses the cached items rather than running the task again.
+Obviously, these types of implicit changes can cause problems.
+
+To avoid these problems during the build, you need to understand the
+effects of any changes you make. Realize that changes you make directly
+to a function are automatically factored into the checksum calculation.
+Thus, these explicit changes invalidate the associated area of shared
+state cache. However, you need to be aware of any implicit changes that
+are not obvious changes to the code and could affect the output of a
+given task.
+
+When you identify an implicit change, you can easily take steps to
+invalidate the cache and force the tasks to run. The steps you can take
+are as simple as changing a function's comments in the source code. For
+example, to invalidate package shared state files, change the comment
+statements of
+:ref:`ref-tasks-package` or the
+comments of one of the functions it calls. Even though the change is
+purely cosmetic, it causes the checksum to be recalculated and forces
+the build system to run the task again.
+
+.. note::
+
+ For an example of a commit that makes a cosmetic change to invalidate
+ shared state, see this
+ :yocto_git:`commit </poky/commit/meta/classes/package.bbclass?id=737f8bbb4f27b4837047cb9b4fbfe01dfde36d54>`.
+
+Running Specific Tasks
+======================
+
+Any given recipe consists of a set of tasks. The standard BitBake
+behavior in most cases is: :ref:`ref-tasks-fetch`, :ref:`ref-tasks-unpack`, :ref:`ref-tasks-patch`,
+:ref:`ref-tasks-configure`, :ref:`ref-tasks-compile`, :ref:`ref-tasks-install`, :ref:`ref-tasks-package`,
+:ref:`do_package_write_* <ref-tasks-package_write_deb>`, and :ref:`ref-tasks-build`. The default task is
+:ref:`ref-tasks-build` and any tasks on which it depends build first. Some tasks,
+such as :ref:`ref-tasks-devshell`, are not part of the default build chain. If you
+wish to run a task that is not part of the default build chain, you can
+use the ``-c`` option in BitBake. Here is an example::
+
+ $ bitbake matchbox-desktop -c devshell
+
+The ``-c`` option respects task dependencies, which means that all other
+tasks (including tasks from other recipes) that the specified task
+depends on will be run before the task. Even when you manually specify a
+task to run with ``-c``, BitBake will only run the task if it considers
+it "out of date". See the
+":ref:`overview-manual/concepts:stamp files and the rerunning of tasks`"
+section in the Yocto Project Overview and Concepts Manual for how
+BitBake determines whether a task is "out of date".
+
+If you want to force an up-to-date task to be rerun (e.g. because you
+made manual modifications to the recipe's
+:term:`WORKDIR` that you want to try
+out), then you can use the ``-f`` option.
+
+.. note::
+
+ The reason ``-f`` is never required when running the
+ :ref:`ref-tasks-devshell` task is because the
+ [\ :ref:`nostamp <bitbake-user-manual/bitbake-user-manual-metadata:variable flags>`\ ]
+ variable flag is already set for the task.
+
+The following example shows one way you can use the ``-f`` option::
+
+ $ bitbake matchbox-desktop
+ .
+ .
+ make some changes to the source code in the work directory
+ .
+ .
+ $ bitbake matchbox-desktop -c compile -f
+ $ bitbake matchbox-desktop
+
+This sequence first builds and then recompiles ``matchbox-desktop``. The
+last command reruns all tasks (basically the packaging tasks) after the
+compile. BitBake recognizes that the :ref:`ref-tasks-compile` task was rerun and
+therefore understands that the other tasks also need to be run again.
+
+Another, shorter way to rerun a task and all
+:ref:`ref-manual/tasks:normal recipe build tasks`
+that depend on it is to use the ``-C`` option.
+
+.. note::
+
+ This option is upper-cased and is separate from the ``-c``
+ option, which is lower-cased.
+
+Using this option invalidates the given task and then runs the
+:ref:`ref-tasks-build` task, which is
+the default task if no task is given, and the tasks on which it depends.
+You could replace the final two commands in the previous example with
+the following single command::
+
+ $ bitbake matchbox-desktop -C compile
+
+Internally, the ``-f`` and ``-C`` options work by tainting (modifying)
+the input checksum of the specified task. This tainting indirectly
+causes the task and its dependent tasks to be rerun through the normal
+task dependency mechanisms.
+
+.. note::
+
+ BitBake explicitly keeps track of which tasks have been tainted in
+ this fashion, and will print warnings such as the following for
+ builds involving such tasks:
+
+ .. code-block:: none
+
+ WARNING: /home/ulf/poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.1.bb.do_compile is tainted from a forced run
+
+
+ The purpose of the warning is to let you know that the work directory
+ and build output might not be in the clean state they would be in for
+ a "normal" build, depending on what actions you took. To get rid of
+ such warnings, you can remove the work directory and rebuild the
+ recipe, as follows::
+
+ $ bitbake matchbox-desktop -c clean
+ $ bitbake matchbox-desktop
+
+
+You can view a list of tasks in a given package by running the
+:ref:`ref-tasks-listtasks` task as follows::
+
+ $ bitbake matchbox-desktop -c listtasks
+
+The results appear as output to the console and are also in
+the file ``${WORKDIR}/temp/log.do_listtasks``.
+
+General BitBake Problems
+========================
+
+You can see debug output from BitBake by using the ``-D`` option. The
+debug output gives more information about what BitBake is doing and the
+reason behind it. Each ``-D`` option you use increases the logging
+level. The most common usage is ``-DDD``.
+
+The output from ``bitbake -DDD -v targetname`` can reveal why BitBake
+chose a certain version of a package or why BitBake picked a certain
+provider. This command could also help you in a situation where you
+think BitBake did something unexpected.
+
+Building with No Dependencies
+=============================
+
+To build a specific recipe (``.bb`` file), you can use the following
+command form::
+
+ $ bitbake -b somepath/somerecipe.bb
+
+This command form does
+not check for dependencies. Consequently, you should use it only when
+you know existing dependencies have been met.
+
+.. note::
+
+ You can also specify fragments of the filename. In this case, BitBake
+ checks for a unique match.
+
+Recipe Logging Mechanisms
+=========================
+
+The Yocto Project provides several logging functions for producing
+debugging output and reporting errors and warnings. For Python
+functions, the following logging functions are available. All of these functions
+log to ``${T}/log.do_``\ `task`, and can also log to standard output
+(stdout) with the right settings:
+
+- ``bb.plain(msg)``: Writes msg as is to the log while also
+ logging to stdout.
+
+- ``bb.note(msg)``: Writes "NOTE: msg" to the log. Also logs to
+ stdout if BitBake is called with "-v".
+
+- ``bb.debug(level, msg)``: Writes "DEBUG: msg" to the log. Also logs to
+ stdout if the log level is greater than or equal to level. See the
+ ``-D`` option described in the
+ ":ref:`bitbake-user-manual/bitbake-user-manual-intro:usage and syntax`"
+ section of the BitBake User Manual for more information.
+
+- ``bb.warn(msg)``: Writes "WARNING: msg" to the log while also
+ logging to stdout.
+
+- ``bb.error(msg)``: Writes "ERROR: msg" to the log while also
+ logging to standard out (stdout).
+
+ .. note::
+
+ Calling this function does not cause the task to fail.
+
+- ``bb.fatal(msg)``: This logging function is similar to
+ ``bb.error(msg)`` but also causes the calling task to fail.
+
+ .. note::
+
+ ``bb.fatal()`` raises an exception, which means you do not need to put a
+ "return" statement after the function.
+
+The same logging functions are also available in shell functions, under
+the names ``bbplain``, ``bbnote``, ``bbdebug``, ``bbwarn``, ``bberror``,
+and ``bbfatal``. The :ref:`ref-classes-logging` class
+implements these functions. See that class in the ``meta/classes``
+folder of the :term:`Source Directory` for information.
+
+Logging With Python
+-------------------
+
+When creating recipes using Python and inserting code that handles build
+logs, keep in mind the goal is to have informative logs while keeping
+the console as "silent" as possible. Also, if you want status messages
+in the log, use the "debug" loglevel.
+
+Here is an example written in Python. The code handles logging for
+a function that determines the number of tasks needed to be run. See the
+":ref:`ref-tasks-listtasks`"
+section for additional information::
+
+ python do_listtasks() {
+ bb.debug(2, "Starting to figure out the task list")
+ if noteworthy_condition:
+ bb.note("There are 47 tasks to run")
+ bb.debug(2, "Got to point xyz")
+ if warning_trigger:
+ bb.warn("Detected warning_trigger, this might be a problem later.")
+ if recoverable_error:
+ bb.error("Hit recoverable_error, you really need to fix this!")
+ if fatal_error:
+ bb.fatal("fatal_error detected, unable to print the task list")
+ bb.plain("The tasks present are abc")
+ bb.debug(2, "Finished figuring out the tasklist")
+ }
+
+Logging With Bash
+-----------------
+
+When creating recipes using Bash and inserting code that handles build
+logs, you have the same goals --- informative with minimal console output.
+The syntax you use for recipes written in Bash is similar to that of
+recipes written in Python described in the previous section.
+
+Here is an example written in Bash. The code logs the progress of
+the ``do_my_function`` function::
+
+ do_my_function() {
+ bbdebug 2 "Running do_my_function"
+ if [ exceptional_condition ]; then
+ bbnote "Hit exceptional_condition"
+ fi
+ bbdebug 2 "Got to point xyz"
+ if [ warning_trigger ]; then
+ bbwarn "Detected warning_trigger, this might cause a problem later."
+ fi
+ if [ recoverable_error ]; then
+ bberror "Hit recoverable_error, correcting"
+ fi
+ if [ fatal_error ]; then
+ bbfatal "fatal_error detected"
+ fi
+ bbdebug 2 "Completed do_my_function"
+ }
+
+
+Debugging Parallel Make Races
+=============================
+
+A parallel ``make`` race occurs when the build consists of several parts
+that are run simultaneously and a situation occurs when the output or
+result of one part is not ready for use with a different part of the
+build that depends on that output. Parallel make races are annoying and
+can sometimes be difficult to reproduce and fix. However, there are some simple
+tips and tricks that can help you debug and fix them. This section
+presents a real-world example of an error encountered on the Yocto
+Project autobuilder and the process used to fix it.
+
+.. note::
+
+ If you cannot properly fix a ``make`` race condition, you can work around it
+ by clearing either the :term:`PARALLEL_MAKE` or :term:`PARALLEL_MAKEINST`
+ variables.
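+
+ For example, a recipe (or a ``.bbappend`` for it) could disable parallel
+ ``make`` entirely by setting::
+
+    PARALLEL_MAKE = ""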
+
+The Failure
+-----------
+
+For this example, assume that you are building an image that depends on
+the "neard" package. And, during the build, BitBake runs into problems
+and creates the following output.
+
+.. note::
+
+ This example log file has longer lines artificially broken to make
+ the listing easier to read.
+
+If you examine the output or the log file, you see the failure during
+``make``:
+
+.. code-block:: none
+
+ | DEBUG: SITE files ['endian-little', 'bit-32', 'ix86-common', 'common-linux', 'common-glibc', 'i586-linux', 'common']
+ | DEBUG: Executing shell function do_compile
+ | NOTE: make -j 16
+ | make --no-print-directory all-am
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/types.h include/near/types.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/log.h include/near/log.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/plugin.h include/near/plugin.h
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/tag.h include/near/tag.h
+ | /bin/mkdir -p include/near
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/adapter.h include/near/adapter.h
+ | /bin/mkdir -p include/near
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/ndef.h include/near/ndef.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/tlv.h include/near/tlv.h
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/setting.h include/near/setting.h
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | /bin/mkdir -p include/near
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/device.h include/near/device.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/nfc_copy.h include/near/nfc_copy.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/snep.h include/near/snep.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/version.h include/near/version.h
+ | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
+ 0.14-r0/neard-0.14/include/dbus.h include/near/dbus.h
+ | ./src/genbuiltin nfctype1 nfctype2 nfctype3 nfctype4 p2p > src/builtin.h
+ | i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/pokybuild/yocto-autobuilder/nightly-x86/
+ build/build/tmp/sysroots/qemux86 -DHAVE_CONFIG_H -I. -I./include -I./src -I./gdbus -I/home/pokybuild/
+ yocto-autobuilder/nightly-x86/build/build/tmp/sysroots/qemux86/usr/include/glib-2.0
+ -I/home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/sysroots/qemux86/usr/
+ lib/glib-2.0/include -I/home/pokybuild/yocto-autobuilder/nightly-x86/build/build/
+ tmp/sysroots/qemux86/usr/include/dbus-1.0 -I/home/pokybuild/yocto-autobuilder/
+ nightly-x86/build/build/tmp/sysroots/qemux86/usr/lib/dbus-1.0/include -I/home/pokybuild/yocto-autobuilder/
+ nightly-x86/build/build/tmp/sysroots/qemux86/usr/include/libnl3
+ -DNEAR_PLUGIN_BUILTIN -DPLUGINDIR=\""/usr/lib/near/plugins"\"
+ -DCONFIGDIR=\""/etc/neard\"" -O2 -pipe -g -feliminate-unused-debug-types -c
+ -o tools/snep-send.o tools/snep-send.c
+ | In file included from tools/snep-send.c:16:0:
+ | tools/../src/near.h:41:23: fatal error: near/dbus.h: No such file or directory
+ | #include <near/dbus.h>
+ | ^
+ | compilation terminated.
+ | make[1]: *** [tools/snep-send.o] Error 1
+ | make[1]: *** Waiting for unfinished jobs....
+ | make: *** [all] Error 2
+ | ERROR: oe_runmake failed
+
+Reproducing the Error
+---------------------
+
+Because race conditions are intermittent, they do not manifest
+themselves every time you do the build. In fact, most times the build
+will complete without problems even though the potential race condition
+exists. Thus, once the error surfaces, you need a way to reproduce it.
+
+In this example, compiling the "neard" package is causing the problem.
+So the first thing to do is build "neard" locally. Before you start the
+build, set the
+:term:`PARALLEL_MAKE` variable
+in your ``local.conf`` file to a high number (e.g. "-j 20"). Using a
+high value for :term:`PARALLEL_MAKE` increases the chances of the race
+condition showing up::
+
+ $ bitbake neard
+
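+For example, before running the build above, the ``local.conf`` change could
+simply be (the exact value is illustrative)::
+
+ PARALLEL_MAKE = "-j 20"
+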
+Once the local build for "neard" completes, start a ``devshell`` build::
+
+ $ bitbake neard -c devshell
+
+For information on how to use a ``devshell``, see the
+":ref:`dev-manual/development-shell:using a development shell`" section.
+
+In the ``devshell``, do the following::
+
+ $ make clean
+ $ make tools/snep-send.o
+
+The ``devshell`` commands make the failure clearly visible. In this
+case, the ``neard`` Makefile target is missing a dependency. Here is
+some abbreviated sample output, with the effect of the missing
+dependency visible at the end::
+
+ i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/scott-lenovo/......
+ .
+ .
+ .
+ tools/snep-send.c
+ In file included from tools/snep-send.c:16:0:
+ tools/../src/near.h:41:23: fatal error: near/dbus.h: No such file or directory
+ #include <near/dbus.h>
+ ^
+ compilation terminated.
+ make: *** [tools/snep-send.o] Error 1
+ $
+
+
+Creating a Patch for the Fix
+----------------------------
+
+Because there is a missing dependency for the Makefile target, you need
+to patch the ``Makefile.am`` file, which is the source from which
+``Makefile.in`` (and ultimately the Makefile) is generated. You can use
+Quilt to create the patch::
+
+ $ quilt new parallelmake.patch
+ Patch patches/parallelmake.patch is now on top
+ $ quilt add Makefile.am
+ File Makefile.am added to patch patches/parallelmake.patch
+
+For more information on using Quilt, see the
+":ref:`dev-manual/quilt:using quilt in your workflow`" section.
+
+At this point you need to make the edits to ``Makefile.am`` to add the
+missing dependency. For our example, you have to add the following line
+to the file::
+
+ tools/snep-send.$(OBJEXT): include/near/dbus.h
+
+Once you have edited the file, use the ``refresh`` command to create the
+patch::
+
+ $ quilt refresh
+ Refreshed patch patches/parallelmake.patch
+
+Once the patch file is created, you need to copy it into the directory of
+the recipe it applies to. Here is an example assuming a top-level
+:term:`Source Directory` named ``poky``::
+
+ $ cp patches/parallelmake.patch poky/meta/recipes-connectivity/neard/neard
+
+The final thing you need to do to implement the fix in the build is to
+update the "neard" recipe (i.e. ``neard_0.14.bb``) so that the
+:term:`SRC_URI` statement includes the patch file. The recipe file is
+in the directory above the patch directory. Here is what the edited
+:term:`SRC_URI` statement would look like::
+
+ SRC_URI = "${KERNELORG_MIRROR}/linux/network/nfc/${BPN}-${PV}.tar.xz \
+ file://neard.in \
+ file://neard.service.in \
+ file://parallelmake.patch \
+ "
+
+With the patch complete and moved to the correct folder and the
+:term:`SRC_URI` statement updated, you can exit the ``devshell``::
+
+ $ exit
+
+Testing the Build
+-----------------
+
+With everything in place, you can get back to trying the build again
+locally::
+
+ $ bitbake neard
+
+This build should succeed.
+
+Now you can open up a ``devshell`` again and repeat the clean and make
+operations as follows::
+
+ $ bitbake neard -c devshell
+ $ make clean
+ $ make tools/snep-send.o
+
+The build should work without issue.
+
+As with any problem that originates upstream, you should submit the fix
+both to the recipe in OE-Core and to the upstream project, so that the
+problem is taken care of at its source. See the
+":doc:`../contributor-guide/submit-changes`" section for more information.
+
+Debugging With the GNU Project Debugger (GDB) Remotely
+======================================================
+
+GDB allows you to examine running programs, which in turn helps you to
+understand and fix problems. It also allows you to perform post-mortem
+style analysis of program crashes. GDB is available as a package within
+the Yocto Project and is installed in SDK images by default. See the
+":ref:`ref-manual/images:Images`" chapter in the Yocto
+Project Reference Manual for a description of these images. You can find
+information on GDB at https://sourceware.org/gdb/.
+
+.. note::
+
+ For best results, install debug (``-dbg``) packages for the applications you
+ are going to debug. Doing so makes extra debug symbols available that give
+ you more meaningful output.
+
+Sometimes, due to memory or disk space constraints, it is not possible
+to use GDB directly on the remote target to debug applications. These
+constraints arise because GDB needs to load the debugging information
+and the binaries of the process being debugged. Additionally, GDB needs
+to perform many computations to locate information such as function
+names, variable names and values, stack traces and so forth --- even
+before starting the debugging process. These extra computations place
+more load on the target system and can alter the characteristics of the
+program being debugged.
+
+To help get past the previously mentioned constraints, there are two
+methods you can use: running a debuginfod server and using gdbserver.
+
+Using the debuginfod server method
+----------------------------------
+
+``debuginfod`` from ``elfutils`` is a way to distribute ``debuginfo`` files.
+Running a ``debuginfod`` server makes debug symbols readily available: you
+do not need to install the debugging information for the process being
+debugged on the target, because the debugger can fetch the debug symbols it
+needs from the server.
+
+To run a ``debuginfod`` server, you need to do the following:
+
+- Ensure that ``debuginfod`` is present in :term:`DISTRO_FEATURES`
+ (it already is in the ``OpenEmbedded-core`` defaults and the ``poky``
+ reference distribution). If not, set it in your distro configuration
+ file or in ``local.conf``::
+
+ DISTRO_FEATURES:append = " debuginfod"
+
+ This distro feature enables the server and client library in ``elfutils``,
+ and enables ``debuginfod`` support in clients (at the moment, ``gdb`` and ``binutils``).
+
+- Run the following command to launch the ``debuginfod`` server on the host::
+
+ $ oe-debuginfod
+
+- To use ``debuginfod`` on the target, you need to know the IP address and
+ port on which ``debuginfod`` is listening on the host (the port defaults to
+ 8002), and export that into the shell environment, for example in ``qemu``::
+
+ root@qemux86-64:~# export DEBUGINFOD_URLS="http://192.168.7.1:8002/"
+
+- Then fetching debug information should simply work when running ``gdb``,
+ ``readelf`` or ``objdump`` on the target, for example::
+
+ root@qemux86-64:~# gdb /bin/cat
+ ...
+ Reading symbols from /bin/cat...
+ Downloading separate debug info for /bin/cat...
+ Reading symbols from /home/root/.cache/debuginfod_client/923dc4780cfbc545850c616bffa884b6b5eaf322/debuginfo...
+
+- It's also possible to use ``debuginfod-find`` to just query the server::
+
+ root@qemux86-64:~# debuginfod-find debuginfo /bin/ls
+ /home/root/.cache/debuginfod_client/356edc585f7f82d46f94fcb87a86a3fe2d2e60bd/debuginfo
+
+
+Using the gdbserver method
+--------------------------
+
+With this method, gdbserver runs on the remote target and does not load
+any debugging information from the debugged process. Instead, a GDB
+instance running on a remote computer (the host GDB) processes the
+debugging information. The host GDB then sends control commands to
+gdbserver to make it stop or start the debugged program, as well as to
+read or write memory regions of that debugged program. All the debugging
+information is loaded and processed, and all the heavy debugging work is
+done, by the host GDB. Offloading these tasks allows the gdbserver
+running on the target to remain small and fast.
+
+Because the host GDB is responsible for loading the debugging
+information and for doing the necessary processing to make actual
+debugging happen, you have to make sure the host can access the
+unstripped binaries complete with their debugging information and also
+be sure the target is compiled with no optimizations. The host GDB must
+also have local access to all the libraries used by the debugged
+program. Because gdbserver does not need any local debugging
+information, the binaries on the remote target can remain stripped.
+However, the binaries must also be compiled without optimization so they
+match the host's binaries.
+
+To remain consistent with GDB documentation and terminology, the binary
+being debugged on the remote target machine is referred to as the
+"inferior" binary. For documentation on GDB see the `GDB
+site <https://sourceware.org/gdb/documentation/>`__.
+
+The following steps show you how to debug using the GNU project
+debugger.
+
+#. *Configure your build system to construct the companion debug
+ filesystem:*
+
+ In your ``local.conf`` file, set the following::
+
+ IMAGE_GEN_DEBUGFS = "1"
+ IMAGE_FSTYPES_DEBUGFS = "tar.bz2"
+
+ These options cause the OpenEmbedded build system to generate a special
+ companion filesystem fragment, which contains the source files and debug
+ symbols that match your deployable filesystem. The build system does this
+ by looking at what is in the deployed filesystem and pulling in the
+ corresponding ``-dbg`` packages.
+
+ The companion debug filesystem is not a complete filesystem, but only
+ contains the debug fragments. This filesystem must be combined with
+ the full filesystem for debugging. Subsequent steps in this procedure
+ show how to combine the partial filesystem with the full filesystem.
+
+#. *Configure the system to include gdbserver in the target filesystem:*
+
+ Make the following addition in your ``local.conf`` file::
+
+ EXTRA_IMAGE_FEATURES:append = " tools-debug"
+
+ The change makes
+ sure the ``gdbserver`` package is included.
+
+#. *Build the environment:*
+
+ Use the following command to construct the image and the companion
+ Debug Filesystem::
+
+ $ bitbake image
+
+ Next, build the cross GDB component and make it available for debugging:
+ build the SDK that matches the image. Building the SDK is best for a
+ production build that can be used later for debugging, especially during
+ long-term maintenance::
+
+ $ bitbake -c populate_sdk image
+
+ Alternatively, you can build the minimal toolchain components that
+ match the target. Doing so creates a smaller-than-typical SDK that
+ contains only a minimal set of components with which to build simple
+ test applications and run the debugger::
+
+ $ bitbake meta-toolchain
+
+ A final method is to build GDB itself within the build system::
+
+ $ bitbake gdb-cross-<architecture>
+
+ Doing so produces a temporary copy of
+ ``cross-gdb`` you can use for debugging during development. While
+ this is the quickest approach, the two previous methods in this step
+ are better when considering long-term maintenance strategies.
+
+ .. note::
+
+ If you run ``bitbake gdb-cross``, the OpenEmbedded build system suggests
+ the actual recipe name to build (e.g. ``gdb-cross-i586``). The suggestion
+ is usually the name you want to use.
+
+#. *Set up the* ``debugfs``\ *:*
+
+ Run the following commands to set up the ``debugfs``::
+
+ $ mkdir debugfs
+ $ cd debugfs
+ $ tar xvfj build-dir/tmp/deploy/images/machine/image.rootfs.tar.bz2
+ $ tar xvfj build-dir/tmp/deploy/images/machine/image-dbg.rootfs.tar.bz2
+
+#. *Set up GDB:*
+
+ Install the SDK (if you built one) and then source the correct
+ environment file. Sourcing the environment file puts the SDK in your
+ ``PATH`` environment variable and sets ``$GDB`` to the SDK's debugger.
+
+ If you are using the build system, GDB is located in
+ `build-dir`\ ``/tmp/sysroots/``\ `host`\ ``/usr/bin/``\ `architecture`\ ``/``\ `architecture`\ ``-gdb``.
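+
+ As an illustrative sketch only (the installation directory and script name
+ depend on the SDK you built and on your target), sourcing the SDK
+ environment typically looks like this::
+
+    $ source /opt/poky/5.0/environment-setup-core2-64-poky-linux
+    $ $GDB --version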
+
+#. *Boot the target:*
+
+ For information on how to run QEMU, see the `QEMU
+ Documentation <https://wiki.qemu.org/Documentation/GettingStartedDevelopers>`__.
+
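+ For example, if you built for the ``qemux86`` machine used elsewhere in
+ this section, the target can typically be booted with::
+
+    $ runqemu qemux86
+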
+ .. note::
+
+ Be sure to verify that your host can access the target via TCP.
+
+#. *Debug a program:*
+
+ Debugging a program involves running gdbserver on the target and then
+ running GDB on the host. The example in this step debugs ``gzip``:
+
+ .. code-block:: shell
+
+ root@qemux86:~# gdbserver localhost:1234 /bin/gzip --help
+
+ For
+ additional gdbserver options, see the `GDB Server
+ Documentation <https://www.gnu.org/software/gdb/documentation/>`__.
+
+ After running gdbserver on the target, you need to run GDB on the
+ host, configure it, and connect to the target. Use these commands::
+
+ $ cd directory-holding-the-debugfs-directory
+ $ arch-gdb
+ (gdb) set sysroot debugfs
+ (gdb) set substitute-path /usr/src/debug debugfs/usr/src/debug
+ (gdb) target remote IP-of-target:1234
+
+ At this
+ point, everything should automatically load (i.e. matching binaries,
+ symbols and headers).
+
+ .. note::
+
+ The GDB ``set`` commands in the previous example can be placed into the
+ user's ``~/.gdbinit`` file. Upon starting, GDB automatically runs whatever
+ commands are in that file.
+
+#. *Deploying without a full image rebuild:*
+
+ In many cases, during development you want a quick method to deploy a
+ new binary to the target and debug it, without waiting for a full
+ image build.
+
+ One approach to solving this situation is to just build the component
+ you want to debug. Once you have built the component, copy the
+ executable directly to both the target and the host ``debugfs``.
+
+ If the binary is processed through the debug splitting in
+ OpenEmbedded, you should also copy the debug items (i.e. ``.debug``
+ contents and corresponding ``/usr/src/debug`` files) from the work
+ directory. Here is an example::
+
+ $ bitbake bash
+ $ bitbake -c devshell bash
+ $ cd ..
+ $ scp packages-split/bash/bin/bash target:/bin/bash
+ $ cp -a packages-split/bash-dbg/* path/debugfs
+
+Debugging with the GNU Project Debugger (GDB) on the Target
+===========================================================
+
+The previous section addressed using GDB remotely for debugging
+purposes, which is the most usual case due to the inherent hardware
+limitations on many embedded devices. However, debugging on the target
+hardware itself is also possible with more powerful devices. This
+section describes what you need to do to support using GDB to debug
+directly on the target hardware.
+
+To support this kind of debugging, you need to do the following:
+
+- Ensure that GDB is on the target. You can do this by making
+ the following addition to your ``local.conf`` file::
+
+ EXTRA_IMAGE_FEATURES:append = " tools-debug"
+
+- Ensure that debug symbols are present. You can do so by adding the
+ corresponding ``-dbg`` package to :term:`IMAGE_INSTALL`::
+
+ IMAGE_INSTALL:append = " packagename-dbg"
+
+ Alternatively, you can add the following to ``local.conf`` to include
+ all the debug symbols::
+
+ EXTRA_IMAGE_FEATURES:append = " dbg-pkgs"
+
+.. note::
+
+ To improve the accuracy of the debug information, you can reduce the
+ level of optimization used by the compiler. For example, adding the
+ following line to your ``local.conf`` file reduces optimization from the
+ :term:`FULL_OPTIMIZATION` value of "-O2" to the :term:`DEBUG_OPTIMIZATION`
+ value of "-O -fno-omit-frame-pointer"::
+
+ DEBUG_BUILD = "1"
+
+ Keep in mind that this reduces the application's performance and is
+ recommended only for debugging purposes.
+
+Enabling Minidebuginfo
+======================
+
+Enabling the "minidebuginfo" :term:`DISTRO_FEATURES` feature adds a compressed
+ELF section ``.gnu_debugdata``, containing only function names, to all
+binaries. This increases the size of the binaries by only 5 to 10%. For
+comparison, full debug symbols can be 10 times as big as a stripped binary,
+so it is not always possible to deploy them. Minidebuginfo allows you to
+retrieve a call stack using GDB (the ``backtrace`` command) without deploying
+full debug symbols to the target. It also allows you to retrieve a
+symbolicated call stack when using ``systemd-coredump`` to manage coredumps
+(the ``coredumpctl list`` and ``coredumpctl info`` commands).
+
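+To enable it, add "minidebuginfo" to :term:`DISTRO_FEATURES`, for example in
+``local.conf``::
+
+ DISTRO_FEATURES:append = " minidebuginfo"
+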
+This feature was created by Fedora, see https://fedoraproject.org/wiki/Features/MiniDebugInfo for
+more details.
+
+Other Debugging Tips
+====================
+
+Here are some other tips that you might find useful:
+
+- When adding new packages, it is worth watching for undesirable items
+ making their way into compiler command lines. For example, you do not
+ want references to files on the build host such as ``/usr/lib/`` or
+ ``/usr/include/``.
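+
+ One way to check for this (the recipe name and path below are only an
+ example) is to search a recipe's compile log for host include or library
+ paths::
+
+    $ grep -e '-I/usr/include' -e '-L/usr/lib' \
+          tmp/work/*/recipename/*/temp/log.do_compile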
+
+- If you want to remove the ``psplash`` boot splashscreen, add
+ ``psplash=false`` to the kernel command line. Doing so prevents
+ ``psplash`` from loading and thus allows you to see the console. It
+ is also possible to switch out of the splashscreen by switching the
+ virtual console (e.g. Fn+Left or Fn+Right on a Zaurus).
+
+- Removing :term:`TMPDIR` (usually ``tmp/``, within the
+ :term:`Build Directory`) can often fix temporary build issues. Removing
+ :term:`TMPDIR` is usually a relatively cheap operation, because task output
+ will be cached in :term:`SSTATE_DIR` (usually ``sstate-cache/``, which is
+ also in the :term:`Build Directory`).
+
+ .. note::
+
+ Removing :term:`TMPDIR` might be a workaround rather than a fix.
+ Consequently, trying to determine the underlying cause of an issue before
+ removing the directory is a good idea.
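+
+ If you do decide to try this, the procedure is simply to remove the
+ directory and rebuild, for example (run from the top of the
+ :term:`Build Directory`; the image name is illustrative)::
+
+    $ rm -rf tmp
+    $ bitbake core-image-minimal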
+
+- Understanding how a feature is used in practice within existing
+ recipes can be very helpful. It is recommended that you configure
+ some method that allows you to quickly search through files.
+
+ Using GNU Grep, you can use the following shell function to
+ recursively search through common recipe-related files, skipping
+ binary files, ``.git`` directories, and the :term:`Build Directory`
+ (assuming its name starts with "build")::
+
+ g() {
+ grep -Ir \
+ --exclude-dir=.git \
+ --exclude-dir='build*' \
+ --include='*.bb*' \
+ --include='*.inc*' \
+ --include='*.conf*' \
+ --include='*.py*' \
+ "$@"
+ }
+
+ Here are some usage examples::
+
+ $ g FOO # Search recursively for "FOO"
+ $ g -i foo # Search recursively for "foo", ignoring case
+ $ g -w FOO # Search recursively for "FOO" as a word, ignoring e.g. "FOOBAR"
+
+ If figuring
+ out how some feature works requires a lot of searching, it might
+ indicate that the documentation should be extended or improved. In
+ such cases, consider filing a documentation bug using the Yocto
+ Project implementation of
+ :yocto_bugs:`Bugzilla <>`. For information on
+ how to submit a bug against the Yocto Project, see the Yocto Project
+ Bugzilla :yocto_wiki:`wiki page </Bugzilla_Configuration_and_Bug_Tracking>`
+ and the ":doc:`../contributor-guide/report-defect`" section.
+
+ .. note::
+
+ The manuals might not be the right place to document variables
+ that are purely internal and have a limited scope (e.g. internal
+ variables used to implement a single ``.bbclass`` file).
+
diff --git a/documentation/dev-manual/dev-manual-common-tasks.xml b/documentation/dev-manual/dev-manual-common-tasks.xml
deleted file mode 100644
index 8bb8612e0f..0000000000
--- a/documentation/dev-manual/dev-manual-common-tasks.xml
+++ /dev/null
@@ -1,16034 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='extendpoky'>
-
-<title>Common Tasks</title>
- <para>
- This chapter describes fundamental procedures such as creating layers,
- adding new software packages, extending or customizing images,
- porting work to new hardware (adding a new machine), and so forth.
- You will find that the procedures documented here occur often in the
- development cycle using the Yocto Project.
- </para>
-
- <section id="understanding-and-creating-layers">
- <title>Understanding and Creating Layers</title>
-
- <para>
- The OpenEmbedded build system supports organizing
- <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink> into
- multiple layers.
- Layers allow you to isolate different types of customizations from
- each other.
- For introductory information on the Yocto Project Layer Model,
- see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#the-yocto-project-layer-model'>The Yocto Project Layer Model</ulink>"
- section in the Yocto Project Overview and Concepts Manual.
- </para>
-
- <section id='creating-your-own-layer'>
- <title>Creating Your Own Layer</title>
-
- <para>
- It is very easy to create your own layers to use with the
- OpenEmbedded build system.
- The Yocto Project ships with tools that speed up creating
- layers.
- This section describes the steps you perform by hand to create
- layers so that you can better understand them.
- For information about the layer-creation tools, see the
- "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-bitbake-layers-script'>Creating a New BSP Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
- section in the Yocto Project Board Support Package (BSP)
- Developer's Guide and the
- "<link linkend='creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</link>"
- section further down in this manual.
- </para>
-
- <para>
- Follow these general steps to create your layer without using
- tools:
- <orderedlist>
- <listitem><para>
- <emphasis>Check Existing Layers:</emphasis>
- Before creating a new layer, you should be sure someone
- has not already created a layer containing the Metadata
- you need.
- You can see the
- <ulink url='http://layers.openembedded.org/layerindex/layers/'>OpenEmbedded Metadata Index</ulink>
- for a list of layers from the OpenEmbedded community
- that can be used in the Yocto Project.
- You could find a layer that is identical or close to
- what you need.
- </para></listitem>
- <listitem><para>
- <emphasis>Create a Directory:</emphasis>
- Create the directory for your layer.
- When you create the layer, be sure to create the
- directory in an area not associated with the
- Yocto Project
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- (e.g. the cloned <filename>poky</filename> repository).
- </para>
-
- <para>While not strictly required, prepend the name of
- the directory with the string "meta-".
- For example:
- <literallayout class='monospaced'>
- meta-mylayer
- meta-GUI_xyz
- meta-mymachine
- </literallayout>
- With rare exceptions, a layer's name follows this
- form:
- <literallayout class='monospaced'>
- meta-<replaceable>root_name</replaceable>
- </literallayout>
- Following this layer naming convention can
- save you trouble later when tools, components, or
- variables "assume" your layer name begins with "meta-".
- A notable example is in configuration files as
- shown in the following step where layer names without
- the "meta-" string are appended
- to several variables used in the configuration.
- </para></listitem>
- <listitem><para id='dev-layer-config-file-description'>
- <emphasis>Create a Layer Configuration File:</emphasis>
- Inside your new layer folder, you need to create a
- <filename>conf/layer.conf</filename> file.
- It is easiest to take an existing layer configuration
- file and copy that to your layer's
- <filename>conf</filename> directory and then modify the
- file as needed.</para>
-
- <para>The
- <filename>meta-yocto-bsp/conf/layer.conf</filename> file
- in the Yocto Project
- <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta-yocto-bsp/conf'>Source Repositories</ulink>
- demonstrates the required syntax.
- For your layer, you need to replace "yoctobsp" with
- a unique identifier for your layer (e.g. "machinexyz"
- for a layer named "meta-machinexyz"):
- <literallayout class='monospaced'>
- # We have a conf and classes directory, add to BBPATH
- BBPATH .= ":${LAYERDIR}"
-
- # We have recipes-* directories, add to BBFILES
- BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
- ${LAYERDIR}/recipes-*/*/*.bbappend"
-
- BBFILE_COLLECTIONS += "yoctobsp"
- BBFILE_PATTERN_yoctobsp = "^${LAYERDIR}/"
- BBFILE_PRIORITY_yoctobsp = "5"
- LAYERVERSION_yoctobsp = "4"
- LAYERSERIES_COMPAT_yoctobsp = "&DISTRO_NAME_NO_CAP;"
- </literallayout>
- Following is an explanation of the layer configuration
- file:
- <itemizedlist>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBPATH'><filename>BBPATH</filename></ulink>:
- Adds the layer's root directory to BitBake's
- search path.
- Through the use of the
- <filename>BBPATH</filename> variable, BitBake
- locates class files
- (<filename>.bbclass</filename>),
- configuration files, and files that are
- included with <filename>include</filename> and
- <filename>require</filename> statements.
- For these cases, BitBake uses the first file
- that matches the name found in
- <filename>BBPATH</filename>.
- This is similar to the way the
- <filename>PATH</filename> variable is used for
- binaries.
- It is recommended, therefore, that you use
- unique class and configuration filenames in
- your custom layer.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILES'><filename>BBFILES</filename></ulink>:
- Defines the location for all recipes in the
- layer.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_COLLECTIONS'><filename>BBFILE_COLLECTIONS</filename></ulink>:
- Establishes the current layer through a
- unique identifier that is used throughout the
- OpenEmbedded build system to refer to the layer.
- In this example, the identifier "yoctobsp" is
- the representation for the container layer
- named "meta-yocto-bsp".
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_PATTERN'><filename>BBFILE_PATTERN</filename></ulink>:
- Expands immediately during parsing to
- provide the directory of the layer.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_PRIORITY'><filename>BBFILE_PRIORITY</filename></ulink>:
- Establishes a priority to use for
- recipes in the layer when the OpenEmbedded build
- finds recipes of the same name in different
- layers.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LAYERVERSION'><filename>LAYERVERSION</filename></ulink>:
- Establishes a version number for the layer.
- You can use this version number to specify this
- exact version of the layer as a dependency when
- using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LAYERDEPENDS'><filename>LAYERDEPENDS</filename></ulink>
- variable.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LAYERDEPENDS'><filename>LAYERDEPENDS</filename></ulink>:
- Lists all layers on which this layer depends (if any).
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LAYERSERIES_COMPAT'><filename>LAYERSERIES_COMPAT</filename></ulink>:
- Lists the
- <ulink url='&YOCTO_WIKI_URL;/wiki/Releases'>Yocto Project</ulink>
- releases for which the current version is
- compatible.
- This variable is a good way to indicate if
- your particular layer is current.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Add Content:</emphasis>
- Depending on the type of layer, add the content.
- If the layer adds support for a machine, add the machine
- configuration in a <filename>conf/machine/</filename>
- file within the layer.
- If the layer adds distro policy, add the distro
- configuration in a <filename>conf/distro/</filename>
- file within the layer.
- If the layer introduces new recipes, put the recipes
- you need in <filename>recipes-*</filename>
- subdirectories within the layer.
- <note>
- For an explanation of layer hierarchy that
- is compliant with the Yocto Project, see
- the
- "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-filelayout'>Example Filesystem Layout</ulink>"
- section in the Yocto Project Board
- Support Package (BSP) Developer's Guide.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Test for Compatibility:</emphasis>
- If you want permission to use the Yocto Project
- Compatibility logo with your layer or application that
- uses your layer, perform the steps to apply for
- compatibility.
- See the
- "<link linkend='making-sure-your-layer-is-compatible-with-yocto-project'>Making Sure Your Layer is Compatible With Yocto Project</link>"
- section for more information.
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='best-practices-to-follow-when-creating-layers'>
- <title>Following Best Practices When Creating Layers</title>
-
- <para>
- To create layers that are easier to maintain and that will
- not impact builds for other machines, you should consider the
- information in the following list:
- <itemizedlist>
- <listitem><para>
- <emphasis>Avoid "Overlaying" Entire Recipes from Other Layers in Your Configuration:</emphasis>
- In other words, do not copy an entire recipe into your
- layer and then modify it.
- Rather, use an append file
- (<filename>.bbappend</filename>) to override only those
- parts of the original recipe you need to modify.
- </para></listitem>
- <listitem><para>
- <emphasis>Avoid Duplicating Include Files:</emphasis>
- Use append files (<filename>.bbappend</filename>)
- for each recipe that uses an include file.
- Or, if you are introducing a new recipe that requires
- the included file, use the path relative to the
- original layer directory to refer to the file.
- For example, use
- <filename>require recipes-core/</filename><replaceable>package</replaceable><filename>/</filename><replaceable>file</replaceable><filename>.inc</filename>
- instead of
- <filename>require </filename><replaceable>file</replaceable><filename>.inc</filename>.
- If you're finding you have to overlay the include file,
- it could indicate a deficiency in the include file in
- the layer to which it originally belongs.
- If this is the case, you should try to address that
- deficiency instead of overlaying the include file.
- For example, you could address this by getting the
- maintainer of the include file to add a variable or
- variables to make it easy to override the parts needing
- to be overridden.
- </para></listitem>
- <listitem><para>
- <emphasis>Structure Your Layers:</emphasis>
- Proper use of overrides within append files and
- placement of machine-specific files within your layer
- can ensure that a build is not using the wrong Metadata
- and negatively impacting a build for a different
- machine.
- Following are some examples:
- <itemizedlist>
- <listitem><para>
- <emphasis>Modify Variables to Support a
- Different Machine:</emphasis>
- Suppose you have a layer named
- <filename>meta-one</filename> that adds support
- for building machine "one".
- To do so, you use an append file named
- <filename>base-files.bbappend</filename> and
- create a dependency on "foo" by altering the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- variable:
- <literallayout class='monospaced'>
- DEPENDS = "foo"
- </literallayout>
- The dependency is created during any build that
- includes the layer
- <filename>meta-one</filename>.
- However, you might not want this dependency
- for all machines.
- For example, suppose you are building for
- machine "two" but your
- <filename>bblayers.conf</filename> file has the
- <filename>meta-one</filename> layer included.
- During the build, the
- <filename>base-files</filename> for machine
- "two" will also have the dependency on
- <filename>foo</filename>.</para>
- <para>To make sure your changes apply only when
- building machine "one", use a machine override
- with the <filename>DEPENDS</filename> statement:
- <literallayout class='monospaced'>
- DEPENDS_one = "foo"
- </literallayout>
- You should follow the same strategy when using
- <filename>_append</filename> and
- <filename>_prepend</filename> operations:
- <literallayout class='monospaced'>
- DEPENDS_append_one = " foo"
- DEPENDS_prepend_one = "foo "
- </literallayout>
- As an actual example, here's a snippet from the
- generic kernel include file
- <filename>linux-yocto.inc</filename>,
- wherein the kernel compile and link options are
- adjusted in the case of a subset of the supported
- architectures:
- <literallayout class='monospaced'>
- DEPENDS_append_aarch64 = " libgcc"
- KERNEL_CC_append_aarch64 = " ${TOOLCHAIN_OPTIONS}"
- KERNEL_LD_append_aarch64 = " ${TOOLCHAIN_OPTIONS}"
-
- DEPENDS_append_nios2 = " libgcc"
- KERNEL_CC_append_nios2 = " ${TOOLCHAIN_OPTIONS}"
- KERNEL_LD_append_nios2 = " ${TOOLCHAIN_OPTIONS}"
-
- DEPENDS_append_arc = " libgcc"
- KERNEL_CC_append_arc = " ${TOOLCHAIN_OPTIONS}"
- KERNEL_LD_append_arc = " ${TOOLCHAIN_OPTIONS}"
-
- KERNEL_FEATURES_append_qemuall=" features/debug/printk.scc"
- </literallayout>
- <note>
- Avoiding "+=" and "=+" and using
- machine-specific
- <filename>_append</filename>
- and <filename>_prepend</filename> operations
- is recommended as well.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Place Machine-Specific Files in
- Machine-Specific Locations:</emphasis>
- When you have a base recipe, such as
- <filename>base-files.bb</filename>, that
- contains a
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- statement to a file, you can use an append file
- to cause the build to use your own version of
- the file.
- For example, an append file in your layer at
- <filename>meta-one/recipes-core/base-files/base-files.bbappend</filename>
- could extend
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
- using
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
- as follows:
- <literallayout class='monospaced'>
- FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
- </literallayout>
- The build for machine "one" will pick up your
- machine-specific file as long as you have the
- file in
- <filename>meta-one/recipes-core/base-files/base-files/</filename>.
- However, if you are building for a different
- machine and the
- <filename>bblayers.conf</filename> file includes
- the <filename>meta-one</filename> layer and
- the location of your machine-specific file is
- the first location where that file is found
- according to <filename>FILESPATH</filename>,
- builds for all machines will also use that
- machine-specific file.</para>
- <para>You can make sure that a machine-specific
- file is used for a particular machine by putting
- the file in a subdirectory specific to the
- machine.
- For example, rather than placing the file in
- <filename>meta-one/recipes-core/base-files/base-files/</filename>
- as shown above, put it in
- <filename>meta-one/recipes-core/base-files/base-files/one/</filename>.
- Not only does this make sure the file is used
- only when building for machine "one", but the
- build process locates the file more quickly.</para>
- <para>In summary, you need to place all files
- referenced from <filename>SRC_URI</filename>
- in a machine-specific subdirectory within the
- layer in order to restrict those files to
- machine-specific builds.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Perform Steps to Apply for Yocto Project Compatibility:</emphasis>
- If you want permission to use the
- Yocto Project Compatibility logo with your layer
- or application that uses your layer, perform the
- steps to apply for compatibility.
- See the
- "<link linkend='making-sure-your-layer-is-compatible-with-yocto-project'>Making Sure Your Layer is Compatible With Yocto Project</link>"
- section for more information.
- </para></listitem>
- <listitem><para>
- <emphasis>Follow the Layer Naming Convention:</emphasis>
- Store custom layers in a Git repository that use the
- <filename>meta-<replaceable>layer_name</replaceable></filename>
- format.
- </para></listitem>
- <listitem><para>
- <emphasis>Group Your Layers Locally:</emphasis>
- Clone your repository alongside other cloned
- <filename>meta</filename> directories from the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='making-sure-your-layer-is-compatible-with-yocto-project'>
- <title>Making Sure Your Layer is Compatible With Yocto Project</title>
-
- <para>
- When you create a layer used with the Yocto Project, it is
- advantageous to make sure that the layer interacts well with
- existing Yocto Project layers (i.e. the layer is compatible
- with the Yocto Project).
- Ensuring compatibility makes the layer easy to be consumed
- by others in the Yocto Project community and could allow you
- permission to use the Yocto Project Compatible Logo.
- <note>
- Only Yocto Project member organizations are permitted to
- use the Yocto Project Compatible Logo.
- The logo is not available for general use.
- For information on how to become a Yocto Project member
- organization, see the
- <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>.
- </note>
- </para>
-
- <para>
- The Yocto Project Compatibility Program consists of a layer
- application process that requests permission to use the Yocto
- Project Compatibility Logo for your layer and application.
- The process consists of two parts:
- <orderedlist>
- <listitem><para>
- Successfully passing a script
- (<filename>yocto-check-layer</filename>) that
- when run against your layer, tests it against
- constraints based on experiences of how layers have
- worked in the real world and where pitfalls have been
- found.
- Getting a "PASS" result from the script is required for
- successful compatibility registration.
- </para></listitem>
- <listitem><para>
- Completion of an application acceptance form, which
- you can find at
- <ulink url='https://www.yoctoproject.org/webform/yocto-project-compatible-registration'></ulink>.
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- To be granted permission to use the logo, you need to satisfy
- the following:
- <itemizedlist>
- <listitem><para>
- Be able to check the box indicating that you
- got a "PASS" when running the script against your
- layer.
- </para></listitem>
- <listitem><para>
- Answer "Yes" to the questions on the form or have an
- acceptable explanation for any questions answered "No".
- </para></listitem>
- <listitem><para>
- Be a Yocto Project Member Organization.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- The remainder of this section presents information on the
- registration form and on the
- <filename>yocto-check-layer</filename> script.
- </para>
-
- <section id='yocto-project-compatible-program-application'>
- <title>Yocto Project Compatible Program Application</title>
-
- <para>
- Use the form to apply for your layer's approval.
- Upon successful application, you can use the Yocto
- Project Compatibility Logo with your layer and the
- application that uses your layer.
- </para>
-
- <para>
- To access the form, use this link:
- <ulink url='https://www.yoctoproject.org/webform/yocto-project-compatible-registration'></ulink>.
- Follow the instructions on the form to complete your
- application.
- </para>
-
- <para>
- The application consists of the following sections:
- <itemizedlist>
- <listitem><para>
- <emphasis>Contact Information:</emphasis>
- Provide your contact information as the fields
- require.
- Along with your information, provide the
- released versions of the Yocto Project for which
- your layer is compatible.
- </para></listitem>
- <listitem><para>
- <emphasis>Acceptance Criteria:</emphasis>
- Provide "Yes" or "No" answers for each of the
- items in the checklist.
- Space exists at the bottom of the form for any
- explanations for items for which you answered "No".
- </para></listitem>
- <listitem><para>
- <emphasis>Recommendations:</emphasis>
- Provide answers for the questions regarding Linux
- kernel use and build success.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='yocto-check-layer-script'>
- <title><filename>yocto-check-layer</filename> Script</title>
-
- <para>
- The <filename>yocto-check-layer</filename> script
- provides you a way to assess how compatible your layer is
- with the Yocto Project.
- You should run this script prior to using the form to
- apply for compatibility as described in the previous
- section.
- You need to achieve a "PASS" result in order to have
- your application form successfully processed.
- </para>
-
- <para>
- The script divides tests into three areas: COMMON, BSP,
- and DISTRO.
- For example, given a distribution layer (DISTRO), the
- layer must pass both the COMMON and DISTRO related tests.
- Furthermore, if your layer is a BSP layer, the layer must
- pass the COMMON and BSP set of tests.
- </para>
-
- <para>
- To execute the script, enter the following commands from
- your build directory:
- <literallayout class='monospaced'>
- $ source oe-init-build-env
- $ yocto-check-layer <replaceable>your_layer_directory</replaceable>
- </literallayout>
- Be sure to provide the actual directory for your layer
- as part of the command.
- </para>
-
- <para>
- Entering the command causes the script to determine the
- type of layer and then to execute a set of specific
- tests against the layer.
- The following list overviews the test:
- <itemizedlist>
- <listitem><para>
- <filename>common.test_readme</filename>:
- Tests if a <filename>README</filename> file
- exists in the layer and the file is not empty.
- </para></listitem>
- <listitem><para>
- <filename>common.test_parse</filename>:
- Tests to make sure that BitBake can parse the
- files without error (i.e.
- <filename>bitbake -p</filename>).
- </para></listitem>
- <listitem><para>
- <filename>common.test_show_environment</filename>:
- Tests that the global or per-recipe environment
- is in order without errors (i.e.
- <filename>bitbake -e</filename>).
- </para></listitem>
- <listitem><para>
- <filename>common.test_world</filename>:
- Verifies that <filename>bitbake world</filename> works.
- </para></listitem>
- <listitem><para>
- <filename>common.test_signatures</filename>:
- Tests to be sure that BSP and DISTRO layers do not
- come with recipes that change signatures.
- </para></listitem>
- <listitem><para>
- <filename>common.test_layerseries_compat</filename>:
- Verifies layer compatibility is set properly.
- </para></listitem>
- <listitem><para>
- <filename>bsp.test_bsp_defines_machines</filename>:
- Tests if a BSP layer has machine configurations.
- </para></listitem>
- <listitem><para>
- <filename>bsp.test_bsp_no_set_machine</filename>:
- Tests to ensure a BSP layer does not set the
- machine when the layer is added.
- </para></listitem>
- <listitem><para>
- <filename>bsp.test_machine_world</filename>:
- Verifies that <filename>bitbake world</filename>
- works regardless of which machine is selected.
- </para></listitem>
- <listitem><para>
- <filename>bsp.test_machine_signatures</filename>:
- Verifies that building for a particular machine
- affects only the signature of tasks specific to that
- machine.
- </para></listitem>
- <listitem><para>
- <filename>distro.test_distro_defines_distros</filename>:
- Tests if a DISTRO layer has distro configurations.
- </para></listitem>
- <listitem><para>
- <filename>distro.test_distro_no_set_distros</filename>:
- Tests to ensure a DISTRO layer does not set the
- distribution when the layer is added.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id='enabling-your-layer'>
- <title>Enabling Your Layer</title>
-
- <para>
- Before the OpenEmbedded build system can use your new layer,
- you need to enable it.
- To enable your layer, simply add your layer's path to the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'>BBLAYERS</ulink></filename>
- variable in your <filename>conf/bblayers.conf</filename> file,
- which is found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- The following example shows how to enable a layer named
- <filename>meta-mylayer</filename>:
- <literallayout class='monospaced'>
- # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
- # changes incompatibly
- POKY_BBLAYERS_CONF_VERSION = "2"
-
- BBPATH = "${TOPDIR}"
- BBFILES ?= ""
-
- BBLAYERS ?= " \
- /home/<replaceable>user</replaceable>/poky/meta \
- /home/<replaceable>user</replaceable>/poky/meta-poky \
- /home/<replaceable>user</replaceable>/poky/meta-yocto-bsp \
- /home/<replaceable>user</replaceable>/poky/meta-mylayer \
- "
- </literallayout>
- </para>
-
- <para>
- BitBake parses each <filename>conf/layer.conf</filename> file
- from the top down as specified in the
- <filename>BBLAYERS</filename> variable
- within the <filename>conf/bblayers.conf</filename> file.
- During the processing of each
- <filename>conf/layer.conf</filename> file, BitBake adds the
- recipes, classes and configurations contained within the
- particular layer to the source directory.
- </para>
- </section>
-
- <section id='using-bbappend-files'>
- <title>Using .bbappend Files in Your Layer</title>
-
- <para>
- A recipe that appends Metadata to another recipe is called a
- BitBake append file.
- A BitBake append file uses the <filename>.bbappend</filename>
- file type suffix, while the corresponding recipe to which
- Metadata is being appended uses the <filename>.bb</filename>
- file type suffix.
- </para>
-
- <para>
- You can use a <filename>.bbappend</filename> file in your
- layer to make additions or changes to the content of another
- layer's recipe without having to copy the other layer's
- recipe into your layer.
- Your <filename>.bbappend</filename> file resides in your layer,
- while the main <filename>.bb</filename> recipe file to
- which you are appending Metadata resides in a different layer.
- </para>
-
- <para>
- Being able to append information to an existing recipe not only
- avoids duplication, but also automatically applies recipe
- changes from a different layer into your layer.
- If you were copying recipes, you would have to manually merge
- changes as they occur.
- </para>
-
- <para>
- When you create an append file, you must use the same root
- name as the corresponding recipe file.
- For example, the append file
- <filename>someapp_&DISTRO;.bbappend</filename> must apply to
- <filename>someapp_&DISTRO;.bb</filename>.
- This means the original recipe and append file names are
- version number-specific.
- If the corresponding recipe is renamed to update to a newer
- version, you must also rename and possibly update
- the corresponding <filename>.bbappend</filename> as well.
- During the build process, BitBake displays an error on starting
- if it detects a <filename>.bbappend</filename> file that does
- not have a corresponding recipe with a matching name.
- See the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BB_DANGLINGAPPENDS_WARNONLY'><filename>BB_DANGLINGAPPENDS_WARNONLY</filename></ulink>
- variable for information on how to handle this error.
- </para>
-
- <para>
- As an example, consider the main formfactor recipe and a
- corresponding formfactor append file both from the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
- Here is the main formfactor recipe, which is named
- <filename>formfactor_0.0.bb</filename> and located in the
- "meta" layer at
- <filename>meta/recipes-bsp/formfactor</filename>:
- <literallayout class='monospaced'>
- SUMMARY = "Device formfactor information"
- SECTION = "base"
- LICENSE = "MIT"
- LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
- PR = "r45"
-
- SRC_URI = "file://config file://machconfig"
- S = "${WORKDIR}"
-
- PACKAGE_ARCH = "${MACHINE_ARCH}"
- INHIBIT_DEFAULT_DEPS = "1"
-
- do_install() {
- # Install file only if it has contents
- install -d ${D}${sysconfdir}/formfactor/
- install -m 0644 ${S}/config ${D}${sysconfdir}/formfactor/
- if [ -s "${S}/machconfig" ]; then
- install -m 0644 ${S}/machconfig ${D}${sysconfdir}/formfactor/
- fi
- } </literallayout>
- In the main recipe, note the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- variable, which tells the OpenEmbedded build system where to
- find files during the build.
- </para>
-
- <para>
- Following is the append file, which is named
- <filename>formfactor_0.0.bbappend</filename> and is from the
- Raspberry Pi BSP Layer named
- <filename>meta-raspberrypi</filename>.
- The file is in the layer at
- <filename>recipes-bsp/formfactor</filename>:
- <literallayout class='monospaced'>
- FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
- </literallayout>
- </para>
-
- <para>
- By default, the build system uses the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
- variable to locate files.
- This append file extends the locations by setting the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
- variable.
- Setting this variable in the <filename>.bbappend</filename>
- file is the most reliable and recommended method for adding
- directories to the search path used by the build system
- to find files.
- </para>
-
- <para>
- The statement in this example extends the directories to
- include
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-THISDIR'><filename>THISDIR</filename></ulink><filename>}/${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>,
- which resolves to a directory named
- <filename>formfactor</filename> in the same directory
- in which the append file resides (i.e.
- <filename>meta-raspberrypi/recipes-bsp/formfactor</filename>.
- This implies that you must have the supporting directory
- structure set up that will contain any files or patches you
- will be including from the layer.
- </para>
-
- <para>
- Using the immediate expansion assignment operator
- <filename>:=</filename> is important because of the reference
- to <filename>THISDIR</filename>.
- The trailing colon character is important as it ensures that
- items in the list remain colon-separated.
- <note>
- <para>
- BitBake automatically defines the
- <filename>THISDIR</filename> variable.
- You should never set this variable yourself.
- Using "_prepend" as part of the
- <filename>FILESEXTRAPATHS</filename> ensures your path
- will be searched prior to other paths in the final
- list.
- </para>
-
- <para>
- Also, not all append files add extra files.
- Many append files simply exist to add build options
- (e.g. <filename>systemd</filename>).
- For these cases, your append file would not even
- use the <filename>FILESEXTRAPATHS</filename> statement.
- </para>
- </note>
- </para>
- </section>
-
- <section id='prioritizing-your-layer'>
- <title>Prioritizing Your Layer</title>
-
- <para>
- Each layer is assigned a priority value.
- Priority values control which layer takes precedence if there
- are recipe files with the same name in multiple layers.
- For these cases, the recipe file from the layer with a higher
- priority number takes precedence.
- Priority values also affect the order in which multiple
- <filename>.bbappend</filename> files for the same recipe are
- applied.
- You can either specify the priority manually, or allow the
- build system to calculate it based on the layer's dependencies.
- </para>
-
- <para>
- To specify the layer's priority manually, use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_PRIORITY'><filename>BBFILE_PRIORITY</filename></ulink>
- variable and append the layer's root name:
- <literallayout class='monospaced'>
- BBFILE_PRIORITY_mylayer = "1"
- </literallayout>
- </para>
-
- <note>
- <para>It is possible for a recipe with a lower version number
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- in a layer that has a higher priority to take precedence.</para>
- <para>Also, the layer priority does not currently affect the
- precedence order of <filename>.conf</filename>
- or <filename>.bbclass</filename> files.
- Future versions of BitBake might address this.</para>
- </note>
- </section>
-
- <section id='managing-layers'>
- <title>Managing Layers</title>
-
- <para>
- You can use the BitBake layer management tool
- <filename>bitbake-layers</filename> to provide a view
- into the structure of recipes across a multi-layer project.
- Being able to generate output that reports on configured layers
- with their paths and priorities and on
- <filename>.bbappend</filename> files and their applicable
- recipes can help to reveal potential problems.
- </para>
-
- <para>
- For help on the BitBake layer management tool, use the
- following command:
- <literallayout class='monospaced'>
- $ bitbake-layers --help
- NOTE: Starting bitbake server...
- usage: bitbake-layers [-d] [-q] [-F] [--color COLOR] [-h] &lt;subcommand&gt; ...
-
- BitBake layers utility
-
- optional arguments:
- -d, --debug Enable debug output
- -q, --quiet Print only errors
- -F, --force Force add without recipe parse verification
- --color COLOR Colorize output (where COLOR is auto, always, never)
- -h, --help show this help message and exit
-
- subcommands:
- &lt;subcommand&gt;
- show-layers show current configured layers.
- show-overlayed list overlayed recipes (where the same recipe exists
- in another layer)
- show-recipes list available recipes, showing the layer they are
- provided by
- show-appends list bbappend files and recipe files they apply to
- show-cross-depends Show dependencies between recipes that cross layer
- boundaries.
- add-layer Add one or more layers to bblayers.conf.
- remove-layer Remove one or more layers from bblayers.conf.
- flatten flatten layer configuration into a separate output
- directory.
- layerindex-fetch Fetches a layer from a layer index along with its
- dependent layers, and adds them to conf/bblayers.conf.
- layerindex-show-depends
- Find layer dependencies from layer index.
- create-layer Create a basic layer
-
- Use bitbake-layers &lt;subcommand&gt; --help to get help on a specific command
- </literallayout>
- </para>
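-
-                <para>
-                    For example, you might run the following command to list
-                    the <filename>.bbappend</filename> files in your
-                    configured layers and the recipes to which they apply
-                    (the output is omitted here because it depends entirely
-                    on your configuration):
-                    <literallayout class='monospaced'>
-     $ bitbake-layers show-appends
-                    </literallayout>
-                </para>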
-
- <para>
- The following list describes the available commands:
- <itemizedlist>
- <listitem><para>
- <emphasis><filename>help:</filename></emphasis>
- Displays general help or help on a specified command.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>show-layers:</filename></emphasis>
- Shows the current configured layers.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>show-overlayed:</filename></emphasis>
- Lists overlayed recipes.
- A recipe is overlayed when a recipe with the same name
- exists in another layer that has a higher layer
- priority.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>show-recipes:</filename></emphasis>
- Lists available recipes and the layers that provide them.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>show-appends:</filename></emphasis>
- Lists <filename>.bbappend</filename> files and the
- recipe files to which they apply.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>show-cross-depends:</filename></emphasis>
- Lists dependency relationships between recipes that
- cross layer boundaries.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>add-layer:</filename></emphasis>
- Adds a layer to <filename>bblayers.conf</filename>.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>remove-layer:</filename></emphasis>
-                        Removes a layer from <filename>bblayers.conf</filename>.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>flatten:</filename></emphasis>
- Flattens the layer configuration into a separate output
- directory.
- Flattening your layer configuration builds a "flattened"
- directory that contains the contents of all layers,
- with any overlayed recipes removed and any
- <filename>.bbappend</filename> files appended to the
- corresponding recipes.
- You might have to perform some manual cleanup of the
- flattened layer as follows:
- <itemizedlist>
- <listitem><para>
- Non-recipe files (such as patches)
- are overwritten.
- The flatten command shows a warning for these
- files.
- </para></listitem>
- <listitem><para>
-                            Anything added to a
-                            <filename>layer.conf</filename> file beyond the
-                            normal layer setup needs manual attention, because
-                            only the lowest priority layer's
-                            <filename>layer.conf</filename> is used in the
-                            flattened output.
- </para></listitem>
- <listitem><para>
- Overridden and appended items from
- <filename>.bbappend</filename> files need to be
- cleaned up.
- The contents of each
- <filename>.bbappend</filename> end up in the
- flattened recipe.
- However, if there are appended or changed
- variable values, you need to tidy these up
- yourself.
- Consider the following example.
- Here, the <filename>bitbake-layers</filename>
- command adds the line
- <filename>#### bbappended ...</filename> so that
- you know where the following lines originate:
- <literallayout class='monospaced'>
- ...
- DESCRIPTION = "A useful utility"
- ...
- EXTRA_OECONF = "--enable-something"
- ...
-
- #### bbappended from meta-anotherlayer ####
-
- DESCRIPTION = "Customized utility"
- EXTRA_OECONF += "--enable-somethingelse"
- </literallayout>
-                            Ideally, you would then tidy up the flattened
-                            recipe as follows:
- <literallayout class='monospaced'>
- ...
- DESCRIPTION = "Customized utility"
- ...
- EXTRA_OECONF = "--enable-something --enable-somethingelse"
- ...
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis><filename>layerindex-fetch</filename>:</emphasis>
- Fetches a layer from a layer index, along with its
- dependent layers, and adds the layers to the
- <filename>conf/bblayers.conf</filename> file.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>layerindex-show-depends</filename>:</emphasis>
- Finds layer dependencies from the layer index.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>create-layer</filename>:</emphasis>
- Creates a basic layer.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='creating-a-general-layer-using-the-bitbake-layers-script'>
- <title>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</title>
-
- <para>
- The <filename>bitbake-layers</filename> script with the
- <filename>create-layer</filename> subcommand simplifies
- creating a new general layer.
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- For information on BSP layers, see the
- "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
- section in the Yocto Project Board Specific (BSP)
- Developer's Guide.
- </para></listitem>
- <listitem><para>
- In order to use a layer with the OpenEmbedded
- build system, you need to add the layer to your
- <filename>bblayers.conf</filename> configuration
- file.
- See the
- "<link linkend='adding-a-layer-using-the-bitbake-layers-script'>Adding a Layer Using the <filename>bitbake-layers</filename> Script</link>"
- section for more information.
- </para></listitem>
- </itemizedlist>
- </note>
- The default mode of the script's operation with this
- subcommand is to create a layer with the following:
- <itemizedlist>
- <listitem><para>A layer priority of 6.
- </para></listitem>
- <listitem><para>A <filename>conf</filename>
- subdirectory that contains a
- <filename>layer.conf</filename> file.
- </para></listitem>
- <listitem><para>
- A <filename>recipes-example</filename> subdirectory
- that contains a further subdirectory named
- <filename>example</filename>, which contains
- an <filename>example.bb</filename> recipe file.
- </para></listitem>
-                    <listitem><para>A <filename>COPYING.MIT</filename> file,
- which is the license statement for the layer.
- The script assumes you want to use the MIT license,
- which is typical for most layers, for the contents of
- the layer itself.
- </para></listitem>
- <listitem><para>
-                        A <filename>README</filename> file, which describes
-                        the contents of your new layer.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- In its simplest form, you can use the following command form
- to create a layer.
- The command creates a layer whose name corresponds to
- <replaceable>your_layer_name</replaceable> in the current
- directory:
- <literallayout class='monospaced'>
- $ bitbake-layers create-layer <replaceable>your_layer_name</replaceable>
- </literallayout>
- As an example, the following command creates a layer named
- <filename>meta-scottrif</filename> in your home directory:
- <literallayout class='monospaced'>
- $ cd /usr/home
- $ bitbake-layers create-layer meta-scottrif
- NOTE: Starting bitbake server...
- Add your new layer with 'bitbake-layers add-layer meta-scottrif'
- </literallayout>
- </para>
-
- <para>
- If you want to set the priority of the layer to other than the
- default value of "6", you can either use the
- <filename>&dash;&dash;priority</filename> option or you can
- edit the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_PRIORITY'><filename>BBFILE_PRIORITY</filename></ulink>
- value in the <filename>conf/layer.conf</filename> after the
- script creates it.
- Furthermore, if you want to give the example recipe file
- some name other than the default, you can
- use the
- <filename>&dash;&dash;example-recipe-name</filename> option.
- </para>
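-
-            <para>
-                As a sketch, the following hypothetical command combines both
-                options, creating a layer with a priority of "8" and an
-                example recipe named <filename>widget</filename> (the layer
-                and recipe names here are placeholders only):
-                <literallayout class='monospaced'>
-     $ bitbake-layers create-layer --priority 8 --example-recipe-name widget meta-mylayer
-                </literallayout>
-            </para>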
-
- <para>
- The easiest way to see how the
- <filename>bitbake-layers create-layer</filename> command
- works is to experiment with the script.
- You can also read the usage information by entering the
- following:
- <literallayout class='monospaced'>
- $ bitbake-layers create-layer --help
- NOTE: Starting bitbake server...
- usage: bitbake-layers create-layer [-h] [--priority PRIORITY]
- [--example-recipe-name EXAMPLERECIPE]
- layerdir
-
- Create a basic layer
-
- positional arguments:
- layerdir Layer directory to create
-
- optional arguments:
- -h, --help show this help message and exit
- --priority PRIORITY, -p PRIORITY
- Layer directory to create
- --example-recipe-name EXAMPLERECIPE, -e EXAMPLERECIPE
- Filename of the example recipe
- </literallayout>
- </para>
- </section>
-
- <section id='adding-a-layer-using-the-bitbake-layers-script'>
- <title>Adding a Layer Using the <filename>bitbake-layers</filename> Script</title>
-
- <para>
- Once you create your general layer, you must add it to your
- <filename>bblayers.conf</filename> file.
- Adding the layer to this configuration file makes the
- OpenEmbedded build system aware of your layer so that it can
- search it for metadata.
- </para>
-
- <para>
- Add your layer by using the
- <filename>bitbake-layers add-layer</filename> command:
- <literallayout class='monospaced'>
- $ bitbake-layers add-layer <replaceable>your_layer_name</replaceable>
- </literallayout>
- Here is an example that adds a layer named
- <filename>meta-scottrif</filename> to the configuration file.
- Following the command that adds the layer is another
- <filename>bitbake-layers</filename> command that shows the
- layers that are in your <filename>bblayers.conf</filename>
- file:
- <literallayout class='monospaced'>
- $ bitbake-layers add-layer meta-scottrif
- NOTE: Starting bitbake server...
- Parsing recipes: 100% |##########################################################| Time: 0:00:49
- Parsing of 1441 .bb files complete (0 cached, 1441 parsed). 2055 targets, 56 skipped, 0 masked, 0 errors.
- $ bitbake-layers show-layers
- NOTE: Starting bitbake server...
- layer path priority
- ==========================================================================
- meta /home/scottrif/poky/meta 5
- meta-poky /home/scottrif/poky/meta-poky 5
- meta-yocto-bsp /home/scottrif/poky/meta-yocto-bsp 5
- workspace /home/scottrif/poky/build/workspace 99
- meta-scottrif /home/scottrif/poky/build/meta-scottrif 6
- </literallayout>
- Adding the layer to this file enables the build system to
- locate the layer during the build.
- <note>
- During a build, the OpenEmbedded build system looks in
- the layers from the top of the list down to the bottom
- in that order.
- </note>
- </para>
- </section>
- </section>
-
- <section id='usingpoky-extend-customimage'>
- <title>Customizing Images</title>
-
- <para>
- You can customize images to satisfy particular requirements.
- This section describes several methods and provides guidelines for each.
- </para>
-
- <section id='usingpoky-extend-customimage-localconf'>
- <title>Customizing Images Using <filename>local.conf</filename></title>
-
- <para>
- Probably the easiest way to customize an image is to add a
- package by way of the <filename>local.conf</filename>
- configuration file.
- Because it is limited to local use, this method generally only
- allows you to add packages and is not as flexible as creating
- your own customized image.
- When you add packages using local variables this way, you need
- to realize that these variable changes are in effect for every
- build and consequently affect all images, which might not
- be what you require.
- </para>
-
- <para>
- To add a package to your image using the local configuration
- file, use the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'>IMAGE_INSTALL</ulink></filename>
- variable with the <filename>_append</filename> operator:
- <literallayout class='monospaced'>
- IMAGE_INSTALL_append = " strace"
- </literallayout>
- Use of the syntax is important - specifically, the space between
- the quote and the package name, which is
- <filename>strace</filename> in this example.
- This space is required since the <filename>_append</filename>
- operator does not add the space.
- </para>
-
- <para>
-            Furthermore, you must use <filename>_append</filename> instead
-            of the <filename>+=</filename> operator if you want to avoid
-            ordering issues.
-            Because <filename>_append</filename> appends to the variable
-            unconditionally, it avoids ordering problems caused by the
-            variable being set in image recipes and
-            <filename>.bbclass</filename> files with operators like
-            <filename>?=</filename>.
-            Using <filename>_append</filename> ensures the operation takes
-            effect.
- </para>
-
- <para>
- As shown in its simplest use,
- <filename>IMAGE_INSTALL_append</filename> affects all images.
- It is possible to extend the syntax so that the variable
- applies to a specific image only.
- Here is an example:
- <literallayout class='monospaced'>
- IMAGE_INSTALL_append_pn-core-image-minimal = " strace"
- </literallayout>
- This example adds <filename>strace</filename> to the
- <filename>core-image-minimal</filename> image only.
- </para>
-
- <para>
- You can add packages using a similar approach through the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-CORE_IMAGE_EXTRA_INSTALL'>CORE_IMAGE_EXTRA_INSTALL</ulink></filename>
- variable.
- If you use this variable, only
- <filename>core-image-*</filename> images are affected.
- </para>
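-
-        <para>
-            For example, the following hypothetical line in your
-            <filename>local.conf</filename> file would add
-            <filename>strace</filename> to every
-            <filename>core-image-*</filename> image you build:
-            <literallayout class='monospaced'>
-     CORE_IMAGE_EXTRA_INSTALL += "strace"
-            </literallayout>
-        </para>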
- </section>
-
- <section id='usingpoky-extend-customimage-imagefeatures'>
- <title>Customizing Images Using Custom <filename>IMAGE_FEATURES</filename> and
- <filename>EXTRA_IMAGE_FEATURES</filename></title>
-
- <para>
- Another method for customizing your image is to enable or
- disable high-level image features by using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>
- and <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES'><filename>EXTRA_IMAGE_FEATURES</filename></ulink>
- variables.
- Although the functions for both variables are nearly equivalent,
- best practices dictate using <filename>IMAGE_FEATURES</filename>
- from within a recipe and using
- <filename>EXTRA_IMAGE_FEATURES</filename> from within
- your <filename>local.conf</filename> file, which is found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- </para>
-
- <para>
- To understand how these features work, the best reference is
- <filename>meta/classes/core-image.bbclass</filename>.
-            This class lists the available
-            <filename>IMAGE_FEATURES</filename>, most of which map to
-            package groups, while some, such as
-            <filename>debug-tweaks</filename> and
-            <filename>read-only-rootfs</filename>, resolve as general
-            configuration settings.
- </para>
-
- <para>
- In summary, the file looks at the contents of the
- <filename>IMAGE_FEATURES</filename> variable and then maps
- or configures the feature accordingly.
- Based on this information, the build system automatically
- adds the appropriate packages or configurations to the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>
- variable.
- Effectively, you are enabling extra features by extending the
- class or creating a custom class for use with specialized image
- <filename>.bb</filename> files.
- </para>
-
- <para>
- Use the <filename>EXTRA_IMAGE_FEATURES</filename> variable
- from within your local configuration file.
- Using a separate area from which to enable features with
- this variable helps you avoid overwriting the features in the
- image recipe that are enabled with
- <filename>IMAGE_FEATURES</filename>.
- The value of <filename>EXTRA_IMAGE_FEATURES</filename> is added
- to <filename>IMAGE_FEATURES</filename> within
- <filename>meta/conf/bitbake.conf</filename>.
- </para>
-
- <para>
- To illustrate how you can use these variables to modify your
- image, consider an example that selects the SSH server.
- The Yocto Project ships with two SSH servers you can use
- with your images: Dropbear and OpenSSH.
- Dropbear is a minimal SSH server appropriate for
- resource-constrained environments, while OpenSSH is a
- well-known standard SSH server implementation.
- By default, the <filename>core-image-sato</filename> image
- is configured to use Dropbear.
- The <filename>core-image-full-cmdline</filename> and
- <filename>core-image-lsb</filename> images both
- include OpenSSH.
- The <filename>core-image-minimal</filename> image does not
- contain an SSH server.
- </para>
-
- <para>
- You can customize your image and change these defaults.
- Edit the <filename>IMAGE_FEATURES</filename> variable
- in your recipe or use the
- <filename>EXTRA_IMAGE_FEATURES</filename> in your
- <filename>local.conf</filename> file so that it configures the
- image you are working with to include
- <filename>ssh-server-dropbear</filename> or
- <filename>ssh-server-openssh</filename>.
- </para>
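-
-        <para>
-            For example, adding the following line to your
-            <filename>local.conf</filename> file is one way (a sketch, not
-            the only approach) to include the OpenSSH server in the images
-            you build locally:
-            <literallayout class='monospaced'>
-     EXTRA_IMAGE_FEATURES += "ssh-server-openssh"
-            </literallayout>
-        </para>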
-
- <note>
- See the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
- section in the Yocto Project Reference Manual for a complete
- list of image features that ship with the Yocto Project.
- </note>
- </section>
-
- <section id='usingpoky-extend-customimage-custombb'>
- <title>Customizing Images Using Custom .bb Files</title>
-
- <para>
- You can also customize an image by creating a custom recipe
- that defines additional software as part of the image.
- The following example shows the form for the two lines you need:
- <literallayout class='monospaced'>
- IMAGE_INSTALL = "packagegroup-core-x11-base package1 package2"
-
- inherit core-image
- </literallayout>
- </para>
-
- <para>
- Defining the software using a custom recipe gives you total
- control over the contents of the image.
- It is important to use the correct names of packages in the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'>IMAGE_INSTALL</ulink></filename>
- variable.
- You must use the OpenEmbedded notation and not the Debian notation for the names
- (e.g. <filename>glibc-dev</filename> instead of <filename>libc6-dev</filename>).
- </para>
-
- <para>
- The other method for creating a custom image is to base it on an existing image.
- For example, if you want to create an image based on <filename>core-image-sato</filename>
- but add the additional package <filename>strace</filename> to the image,
- copy the <filename>meta/recipes-sato/images/core-image-sato.bb</filename> to a
- new <filename>.bb</filename> and add the following line to the end of the copy:
- <literallayout class='monospaced'>
- IMAGE_INSTALL += "strace"
- </literallayout>
- </para>
- </section>
-
- <section id='usingpoky-extend-customimage-customtasks'>
- <title>Customizing Images Using Custom Package Groups</title>
-
- <para>
- For complex custom images, the best approach for customizing
- an image is to create a custom package group recipe that is
- used to build the image or images.
- A good example of a package group recipe is
- <filename>meta/recipes-core/packagegroups/packagegroup-base.bb</filename>.
- </para>
-
- <para>
- If you examine that recipe, you see that the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'>PACKAGES</ulink></filename>
- variable lists the package group packages to produce.
- The <filename>inherit packagegroup</filename> statement
- sets appropriate default values and automatically adds
- <filename>-dev</filename>, <filename>-dbg</filename>, and
- <filename>-ptest</filename> complementary packages for each
- package specified in the <filename>PACKAGES</filename>
- statement.
- <note>
- The <filename>inherit packagegroup</filename> line should be
- located near the top of the recipe, certainly before
- the <filename>PACKAGES</filename> statement.
- </note>
- </para>
-
- <para>
- For each package you specify in <filename>PACKAGES</filename>,
- you can use
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'>RDEPENDS</ulink></filename>
- and
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-RRECOMMENDS'>RRECOMMENDS</ulink></filename>
- entries to provide a list of packages the parent task package
- should contain.
- You can see examples of these further down in the
- <filename>packagegroup-base.bb</filename> recipe.
- </para>
-
- <para>
- Here is a short, fabricated example showing the same basic
- pieces for a hypothetical packagegroup defined in
- <filename>packagegroup-custom.bb</filename>, where the
- variable <filename>PN</filename> is the standard way to
- abbreviate the reference to the full packagegroup name
- <filename>packagegroup-custom</filename>:
- <literallayout class='monospaced'>
- DESCRIPTION = "My Custom Package Groups"
-
- inherit packagegroup
-
- PACKAGES = "\
- ${PN}-apps \
- ${PN}-tools \
- "
-
- RDEPENDS_${PN}-apps = "\
- dropbear \
- portmap \
- psplash"
-
- RDEPENDS_${PN}-tools = "\
- oprofile \
- oprofileui-server \
- lttng-tools"
-
- RRECOMMENDS_${PN}-tools = "\
- kernel-module-oprofile"
- </literallayout>
- </para>
-
- <para>
- In the previous example, two package group packages are created with their dependencies and their
-            recommended package dependencies listed: <filename>packagegroup-custom-apps</filename> and
- <filename>packagegroup-custom-tools</filename>.
- To build an image using these package group packages, you need to add
- <filename>packagegroup-custom-apps</filename> and/or
- <filename>packagegroup-custom-tools</filename> to
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'>IMAGE_INSTALL</ulink></filename>.
- For other forms of image dependencies see the other areas of this section.
- </para>
- </section>
-
- <section id='usingpoky-extend-customimage-image-name'>
- <title>Customizing an Image Hostname</title>
-
- <para>
- By default, the configured hostname (i.e.
- <filename>/etc/hostname</filename>) in an image is the
- same as the machine name.
- For example, if
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- equals "qemux86", the configured hostname written to
- <filename>/etc/hostname</filename> is "qemux86".
- </para>
-
- <para>
- You can customize this name by altering the value of the
- "hostname" variable in the
- <filename>base-files</filename> recipe using either
- an append file or a configuration file.
- Use the following in an append file:
- <literallayout class='monospaced'>
- hostname="myhostname"
- </literallayout>
- Use the following in a configuration file:
- <literallayout class='monospaced'>
- hostname_pn-base-files = "myhostname"
- </literallayout>
- </para>
-
- <para>
- Changing the default value of the variable "hostname" can be
- useful in certain situations.
- For example, suppose you need to do extensive testing on an
- image and you would like to easily identify the image
- under test from existing images with typical default
- hostnames.
- In this situation, you could change the default hostname to
- "testme", which results in all the images using the name
- "testme".
- Once testing is complete and you do not need to rebuild the
- image for test any longer, you can easily reset the default
- hostname.
- </para>
-
- <para>
- Another point of interest is that if you unset the variable,
- the image will have no default hostname in the filesystem.
- Here is an example that unsets the variable in a
- configuration file:
- <literallayout class='monospaced'>
- hostname_pn-base-files = ""
- </literallayout>
- Having no default hostname in the filesystem is suitable for
- environments that use dynamic hostnames such as virtual
- machines.
- </para>
- </section>
- </section>
-
- <section id='new-recipe-writing-a-new-recipe'>
- <title>Writing a New Recipe</title>
-
- <para>
- Recipes (<filename>.bb</filename> files) are fundamental components
- in the Yocto Project environment.
- Each software component built by the OpenEmbedded build system
- requires a recipe to define the component.
- This section describes how to create, write, and test a new
- recipe.
- <note>
- For information on variables that are useful for recipes and
- for information about recipe naming issues, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-varlocality-recipe-required'>Required</ulink>"
- section of the Yocto Project Reference Manual.
- </note>
- </para>
-
- <section id='new-recipe-overview'>
- <title>Overview</title>
-
- <para>
- The following figure shows the basic process for creating a
- new recipe.
- The remainder of the section provides details for the steps.
- <imagedata fileref="figures/recipe-workflow.png" width="6in" depth="7in" align="center" scalefit="1" />
- </para>
- </section>
-
- <section id='new-recipe-locate-or-automatically-create-a-base-recipe'>
- <title>Locate or Automatically Create a Base Recipe</title>
-
- <para>
- You can always write a recipe from scratch.
- However, three choices exist that can help you quickly get a
- start on a new recipe:
- <itemizedlist>
- <listitem><para>
- <emphasis><filename>devtool add</filename>:</emphasis>
- A command that assists in creating a recipe and
- an environment conducive to development.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>recipetool create</filename>:</emphasis>
- A command provided by the Yocto Project that automates
- creation of a base recipe based on the source
- files.
- </para></listitem>
- <listitem><para>
- <emphasis>Existing Recipes:</emphasis>
- Location and modification of an existing recipe that is
- similar in function to the recipe you need.
- </para></listitem>
- </itemizedlist>
- <note>
- For information on recipe syntax, see the
- "<link linkend='recipe-syntax'>Recipe Syntax</link>"
- section.
- </note>
- </para>
-
- <section id='new-recipe-creating-the-base-recipe-using-devtool'>
- <title>Creating the Base Recipe Using <filename>devtool add</filename></title>
-
- <para>
- The <filename>devtool add</filename> command uses the same
- logic for auto-creating the recipe as
- <filename>recipetool create</filename>, which is listed
- below.
- Additionally, however, <filename>devtool add</filename>
- sets up an environment that makes it easy for you to
- patch the source and to make changes to the recipe as
- is often necessary when adding a recipe to build a new
- piece of software to be included in a build.
- </para>
-
- <para>
- You can find a complete description of the
- <filename>devtool add</filename> command in the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-a-closer-look-at-devtool-add'>A Closer Look at <filename>devtool</filename> add</ulink>"
- section in the Yocto Project Application Development
- and the Extensible Software Development Kit (eSDK) manual.
- </para>
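-
-                <para>
-                    As a sketch, the following hypothetical command creates a
-                    recipe and a development workspace for software fetched
-                    from a placeholder URL (substitute your own recipe name
-                    and source location):
-                    <literallayout class='monospaced'>
-     $ devtool add hello https://example.com/downloads/hello-1.0.tar.gz
-                    </literallayout>
-                </para>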
- </section>
-
- <section id='new-recipe-creating-the-base-recipe-using-recipetool'>
- <title>Creating the Base Recipe Using <filename>recipetool create</filename></title>
-
- <para>
- <filename>recipetool create</filename> automates creation
- of a base recipe given a set of source code files.
- As long as you can extract or point to the source files,
- the tool will construct a recipe and automatically
- configure all pre-build information into the recipe.
- For example, suppose you have an application that builds
- using Autotools.
- Creating the base recipe using
- <filename>recipetool</filename> results in a recipe
- that has the pre-build dependencies, license requirements,
- and checksums configured.
- </para>
-
- <para>
- To run the tool, you just need to be in your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- and have sourced the build environment setup script
- (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>).
- To get help on the tool, use the following command:
- <literallayout class='monospaced'>
- $ recipetool -h
- NOTE: Starting bitbake server...
- usage: recipetool [-d] [-q] [--color COLOR] [-h] &lt;subcommand&gt; ...
-
- OpenEmbedded recipe tool
-
- options:
- -d, --debug Enable debug output
- -q, --quiet Print only errors
- --color COLOR Colorize output (where COLOR is auto, always, never)
- -h, --help show this help message and exit
-
- subcommands:
- create Create a new recipe
- newappend Create a bbappend for the specified target in the specified
- layer
- setvar Set a variable within a recipe
- appendfile Create/update a bbappend to replace a target file
- appendsrcfiles Create/update a bbappend to add or replace source files
- appendsrcfile Create/update a bbappend to add or replace a source file
- Use recipetool &lt;subcommand&gt; --help to get help on a specific command
- </literallayout>
- </para>
-
- <para>
- Running
- <filename>recipetool create -o</filename>&nbsp;<replaceable>OUTFILE</replaceable>
- creates the base recipe and locates it properly in the
- layer that contains your source files.
- Following are some syntax examples:
- </para>
-
- <para>
- Use this syntax to generate a recipe based on
- <replaceable>source</replaceable>.
- Once generated, the recipe resides in the existing source
- code layer:
- <literallayout class='monospaced'>
- recipetool create -o <replaceable>OUTFILE</replaceable>&nbsp;<replaceable>source</replaceable>
- </literallayout>
- Use this syntax to generate a recipe using code that you
- extract from <replaceable>source</replaceable>.
- The extracted code is placed in its own layer defined
- by <replaceable>EXTERNALSRC</replaceable>.
- <literallayout class='monospaced'>
- recipetool create -o <replaceable>OUTFILE</replaceable> -x <replaceable>EXTERNALSRC</replaceable> <replaceable>source</replaceable>
- </literallayout>
- Use this syntax to generate a recipe based on
- <replaceable>source</replaceable>.
- The options direct <filename>recipetool</filename> to
- generate debugging information.
- Once generated, the recipe resides in the existing source
- code layer:
- <literallayout class='monospaced'>
- recipetool create -d -o <replaceable>OUTFILE</replaceable> <replaceable>source</replaceable>
- </literallayout>
- </para>
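-
-                <para>
-                    As a concrete sketch, the following hypothetical command
-                    generates a recipe for a tarball hosted at a placeholder
-                    URL and writes it into your own layer (the layer path,
-                    recipe name, and URL are examples only):
-                    <literallayout class='monospaced'>
-     recipetool create -o meta-mylayer/recipes-example/hello/hello_1.0.bb https://example.com/downloads/hello-1.0.tar.gz
-                    </literallayout>
-                </para>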
- </section>
-
- <section id='new-recipe-locating-and-using-a-similar-recipe'>
- <title>Locating and Using a Similar Recipe</title>
-
- <para>
- Before writing a recipe from scratch, it is often useful to
- discover whether someone else has already written one that
- meets (or comes close to meeting) your needs.
- The Yocto Project and OpenEmbedded communities maintain many
- recipes that might be candidates for what you are doing.
- You can find a good central index of these recipes in the
- <ulink url='http://layers.openembedded.org'>OpenEmbedded Layer Index</ulink>.
- </para>
-
- <para>
- Working from an existing recipe or a skeleton recipe is the
- best way to get started.
- Here are some points on both methods:
- <itemizedlist>
- <listitem><para><emphasis>Locate and modify a recipe that
- is close to what you want to do:</emphasis>
- This method works when you are familiar with the
- current recipe space.
- The method does not work so well for those new to
- the Yocto Project or writing recipes.</para>
- <para>Some risks associated with this method are
- using a recipe that has areas totally unrelated to
- what you are trying to accomplish with your recipe,
- not recognizing areas of the recipe that you might
- have to add from scratch, and so forth.
- All these risks stem from unfamiliarity with the
- existing recipe space.</para></listitem>
- <listitem><para><emphasis>Use and modify the following
- skeleton recipe:</emphasis>
- If for some reason you do not want to use
- <filename>recipetool</filename> and you cannot
- find an existing recipe that is close to meeting
- your needs, you can use the following structure to
- provide the fundamental areas of a new recipe.
- <literallayout class='monospaced'>
- DESCRIPTION = ""
- HOMEPAGE = ""
- LICENSE = ""
- SECTION = ""
- DEPENDS = ""
- LIC_FILES_CHKSUM = ""
-
- SRC_URI = ""
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id='new-recipe-storing-and-naming-the-recipe'>
- <title>Storing and Naming the Recipe</title>
-
- <para>
- Once you have your base recipe, you should put it in your
- own layer and name it appropriately.
- Locating it correctly ensures that the OpenEmbedded build
- system can find it when you use BitBake to process the
- recipe.
- </para>
-
- <itemizedlist>
- <listitem><para><emphasis>Storing Your Recipe:</emphasis>
- The OpenEmbedded build system locates your recipe
- through the layer's <filename>conf/layer.conf</filename>
- file and the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILES'><filename>BBFILES</filename></ulink>
- variable.
- This variable sets up a path from which the build system can
- locate recipes.
- Here is the typical use:
- <literallayout class='monospaced'>
- BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
- ${LAYERDIR}/recipes-*/*/*.bbappend"
- </literallayout>
- Consequently, you need to be sure you locate your new recipe
- inside your layer such that it can be found.</para>
- <para>You can find more information on how layers are
- structured in the
- "<link linkend='understanding-and-creating-layers'>Understanding and Creating Layers</link>"
- section.</para></listitem>
- <listitem><para><emphasis>Naming Your Recipe:</emphasis>
- When you name your recipe, you need to follow this naming
- convention:
- <literallayout class='monospaced'>
- <replaceable>basename</replaceable>_<replaceable>version</replaceable>.bb
- </literallayout>
- Use lower-cased characters and do not include the reserved
- suffixes <filename>-native</filename>,
- <filename>-cross</filename>, <filename>-initial</filename>,
- or <filename>-dev</filename> casually (i.e. do not use them
- as part of your recipe name unless the string applies).
- Here are some examples:
- <literallayout class='monospaced'>
- cups_1.7.0.bb
- gawk_4.0.2.bb
- irssi_0.8.16-rc1.bb
- </literallayout></para></listitem>
- </itemizedlist>
- </section>
-
- <section id='new-recipe-running-a-build-on-the-recipe'>
- <title>Running a Build on the Recipe</title>
-
- <para>
- Creating a new recipe is usually an iterative process that
- requires using BitBake to process the recipe multiple times in
- order to progressively discover and add information to the
- recipe file.
- </para>
-
- <para>
- Assuming you have sourced the build environment setup script (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>)
- and you are in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
- use BitBake to process your recipe.
- All you need to provide is the
- <filename><replaceable>basename</replaceable></filename> of the recipe as described
- in the previous section:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>basename</replaceable>
- </literallayout>
-
- </para>
-
- <para>
- During the build, the OpenEmbedded build system creates a
- temporary work directory for each recipe
- (<filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}</filename>)
- where it keeps extracted source files, log files, intermediate
- compilation and packaging files, and so forth.
- </para>
-
- <para>
- The path to the per-recipe temporary work directory depends
- on the context in which it is being built.
- The quickest way to find this path is to have BitBake return it
- by running the following:
- <literallayout class='monospaced'>
- $ bitbake -e <replaceable>basename</replaceable> | grep ^WORKDIR=
- </literallayout>
- As an example, assume a Source Directory top-level folder named
- <filename>poky</filename>, a default Build Directory at
- <filename>poky/build</filename>, and a
- <filename>qemux86-poky-linux</filename> machine target system.
- Furthermore, suppose your recipe is named
- <filename>foo_1.3.0.bb</filename>.
- In this case, the work directory the build system uses to
- build the package would be as follows:
- <literallayout class='monospaced'>
- poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0
- </literallayout>
- Inside this directory you can find sub-directories such as
- <filename>image</filename>, <filename>packages-split</filename>,
- and <filename>temp</filename>.
- After the build, you can examine these to determine how well
- the build went.
- <note>
- You can find log files for each task in the recipe's
- <filename>temp</filename> directory (e.g.
- <filename>poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0/temp</filename>).
- Log files are named <filename>log.<replaceable>taskname</replaceable></filename>
- (e.g. <filename>log.do_configure</filename>,
- <filename>log.do_fetch</filename>, and
- <filename>log.do_compile</filename>).
- </note>
- </para>
-
- <para>
- You can find more information about the build process in
- "<ulink url='&YOCTO_DOCS_OM_URL;#overview-development-environment'>The Yocto Project Development Environment</ulink>"
- chapter of the Yocto Project Overview and Concepts Manual.
- </para>
- </section>
-
- <section id='new-recipe-fetching-code'>
- <title>Fetching Code</title>
-
- <para>
- The first thing your recipe must do is specify how to fetch
- the source files.
- Fetching is controlled mainly through the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- variable.
- Your recipe must have a <filename>SRC_URI</filename> variable
- that points to where the source is located.
- For a graphical representation of source locations, see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#sources-dev-environment'>Sources</ulink>"
- section in the Yocto Project Overview and Concepts Manual.
- </para>
-
- <para>
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-fetch'><filename>do_fetch</filename></ulink>
- task uses the prefix of each entry in the
- <filename>SRC_URI</filename> variable value to determine which
- <ulink url='&YOCTO_DOCS_BB_URL;#bb-fetchers'>fetcher</ulink>
- to use to get your source files.
- It is the <filename>SRC_URI</filename> variable that triggers
- the fetcher.
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
- task uses the variable after source is fetched to apply
- patches.
- The OpenEmbedded build system uses
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESOVERRIDES'><filename>FILESOVERRIDES</filename></ulink>
- for scanning directory locations for local files in
- <filename>SRC_URI</filename>.
- </para>
-
- <para>
- The <filename>SRC_URI</filename> variable in your recipe must
- define each unique location for your source files.
- It is good practice to not hard-code version numbers in a URL used
- in <filename>SRC_URI</filename>.
- Rather than hard-code these values, use
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink><filename>}</filename>,
- which causes the fetch process to use the version specified in
- the recipe filename.
- Specifying the version in this manner means that upgrading the
- recipe to a future version is as simple as renaming the recipe
- to match the new version.
- </para>
-
- <para>
- Here is a simple example from the
- <filename>meta/recipes-devtools/strace/strace_5.5.bb</filename>
- recipe where the source comes from a single tarball.
- Notice the use of the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- variable:
- <literallayout class='monospaced'>
-     SRC_URI = "https://strace.io/files/${PV}/strace-${PV}.tar.xz"
- </literallayout>
- </para>
-
- <para>
- Files mentioned in <filename>SRC_URI</filename> whose names end
- in a typical archive extension (e.g. <filename>.tar</filename>,
- <filename>.tar.gz</filename>, <filename>.tar.bz2</filename>,
-            <filename>.zip</filename>, and so forth) are automatically
- extracted during the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-unpack'><filename>do_unpack</filename></ulink>
- task.
- For another example that specifies these types of files, see
- the
- "<link linkend='new-recipe-autotooled-package'>Autotooled Package</link>"
- section.
- </para>
-
- <para>
- Another way of specifying source is from an SCM.
- For Git repositories, you must specify
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
- and you should specify
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- to include the revision with
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCPV'><filename>SRCPV</filename></ulink>.
- Here is an example from the recipe
- <filename>meta/recipes-kernel/blktrace/blktrace_git.bb</filename>:
- <literallayout class='monospaced'>
- SRCREV = "d6918c8832793b4205ed3bfede78c2f915c23385"
-
- PR = "r6"
- PV = "1.0.5+git${SRCPV}"
-
- SRC_URI = "git://git.kernel.dk/blktrace.git \
- file://ldflags.patch"
- </literallayout>
- </para>
-
- <para>
- If your <filename>SRC_URI</filename> statement includes
- URLs pointing to individual files fetched from a remote server
- other than a version control system, BitBake attempts to
- verify the files against checksums defined in your recipe to
- ensure they have not been tampered with or otherwise modified
- since the recipe was written.
- Two checksums are used:
- <filename>SRC_URI[md5sum]</filename> and
- <filename>SRC_URI[sha256sum]</filename>.
- </para>
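-
-        <para>
-            For a recipe that fetches a single remote file, these checksum
-            entries take the following general form, where the values are
-            the strings reported for the fetched file (shown here only as
-            placeholders):
-            <literallayout class='monospaced'>
-     SRC_URI[md5sum] = "<replaceable>md5 checksum string</replaceable>"
-     SRC_URI[sha256sum] = "<replaceable>sha256 checksum string</replaceable>"
-            </literallayout>
-        </para>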
-
- <para>
- If your <filename>SRC_URI</filename> variable points to
- more than a single URL (excluding SCM URLs), you need to
- provide the <filename>md5</filename> and
- <filename>sha256</filename> checksums for each URL.
- For these cases, you provide a name for each URL as part of
- the <filename>SRC_URI</filename> and then reference that name
- in the subsequent checksum statements.
- Here is an example combining lines from the files
- <filename>git.inc</filename> and
- <filename>git_2.24.1.bb</filename>:
- <literallayout class='monospaced'>
- SRC_URI = "${KERNELORG_MIRROR}/software/scm/git/git-${PV}.tar.gz;name=tarball \
- ${KERNELORG_MIRROR}/software/scm/git/git-manpages-${PV}.tar.gz;name=manpages"
-
- SRC_URI[tarball.md5sum] = "166bde96adbbc11c8843d4f8f4f9811b"
- SRC_URI[tarball.sha256sum] = "ad5334956301c86841eb1e5b1bb20884a6bad89a10a6762c958220c7cf64da02"
- SRC_URI[manpages.md5sum] = "31c2272a8979022497ba3d4202df145d"
- SRC_URI[manpages.sha256sum] = "9a7ae3a093bea39770eb96ca3e5b40bff7af0b9f6123f089d7821d0e5b8e1230"
- </literallayout>
- </para>
-
- <para>
- Proper values for <filename>md5</filename> and
- <filename>sha256</filename> checksums might be available
- with other signatures on the download page for the upstream
- source (e.g. <filename>md5</filename>,
- <filename>sha1</filename>, <filename>sha256</filename>,
- <filename>GPG</filename>, and so forth).
- Because the OpenEmbedded build system only deals with
- <filename>sha256sum</filename> and <filename>md5sum</filename>,
- you should verify all the signatures you find by hand.
- </para>
-
- <para>
- If no <filename>SRC_URI</filename> checksums are specified
- when you attempt to build the recipe, or you provide an
- incorrect checksum, the build will produce an error for each
- missing or incorrect checksum.
- As part of the error message, the build system provides
- the checksum string corresponding to the fetched file.
- Once you have the correct checksums, you can copy and paste
- them into your recipe and then run the build again to continue.
- <note>
- As mentioned, if the upstream source provides signatures
- for verifying the downloaded source code, you should
- verify those manually before setting the checksum values
- in the recipe and continuing with the build.
- </note>
- </para>
-
- <para>
- This final example is a bit more complicated and is from the
- <filename>meta/recipes-sato/rxvt-unicode/rxvt-unicode_9.20.bb</filename>
- recipe.
- The example's <filename>SRC_URI</filename> statement identifies
- multiple files as the source files for the recipe: a tarball, a
- patch file, a desktop file, and an icon.
- <literallayout class='monospaced'>
- SRC_URI = "http://dist.schmorp.de/rxvt-unicode/Attic/rxvt-unicode-${PV}.tar.bz2 \
- file://xwc.patch \
- file://rxvt.desktop \
- file://rxvt.png"
- </literallayout>
- </para>
-
- <para>
- When you specify local files using the
- <filename>file://</filename> URI protocol, the build system
- fetches files from the local machine.
- The path is relative to the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
- variable and searches specific directories in a certain order:
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-BP'><filename>BP</filename></ulink><filename>}</filename>,
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-BPN'><filename>BPN</filename></ulink><filename>}</filename>,
- and <filename>files</filename>.
- The directories are assumed to be subdirectories of the
- directory in which the recipe or append file resides.
- For another example that specifies these types of files, see the
- "<link linkend='new-recipe-single-c-file-package-hello-world'>Single .c File Package (Hello World!)</link>"
- section.
- </para>
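-
-        <para>
-            As an illustration, the local files for the
-            <filename>rxvt-unicode</filename> example above would typically
-            live in a subdirectory named after the recipe and sitting next
-            to it, roughly as follows (a sketch of the layout, not a
-            verbatim listing):
-            <literallayout class='monospaced'>
-     recipes-sato/rxvt-unicode/rxvt-unicode_9.20.bb
-     recipes-sato/rxvt-unicode/rxvt-unicode/xwc.patch
-     recipes-sato/rxvt-unicode/rxvt-unicode/rxvt.desktop
-     recipes-sato/rxvt-unicode/rxvt-unicode/rxvt.png
-            </literallayout>
-        </para>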
-
- <para>
- The previous example also specifies a patch file.
- Patch files are files whose names usually end in
- <filename>.patch</filename> or <filename>.diff</filename> but
- can end with compressed suffixes such as
- <filename>diff.gz</filename> and
- <filename>patch.bz2</filename>, for example.
- The build system automatically applies patches as described
- in the
- "<link linkend='new-recipe-patching-code'>Patching Code</link>" section.
- </para>
- </section>
-
- <section id='new-recipe-unpacking-code'>
- <title>Unpacking Code</title>
-
- <para>
- During the build, the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-unpack'><filename>do_unpack</filename></ulink>
- task unpacks the source with
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink><filename>}</filename>
- pointing to where it is unpacked.
- </para>
-
- <para>
- If you are fetching your source files from an upstream source
- archived tarball and the tarball's internal structure matches
- the common convention of a top-level subdirectory named
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-BPN'><filename>BPN</filename></ulink><filename>}-${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink><filename>}</filename>,
- then you do not need to set <filename>S</filename>.
- However, if <filename>SRC_URI</filename> specifies to fetch
- source from an archive that does not use this convention,
- or from an SCM like Git or Subversion, your recipe needs to
- define <filename>S</filename>.
- </para>
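-
-        <para>
-            For example, when fetching from a Git repository, recipes
-            commonly set <filename>S</filename> to the directory into which
-            the Git fetcher unpacks the source:
-            <literallayout class='monospaced'>
-     S = "${WORKDIR}/git"
-            </literallayout>
-        </para>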
-
- <para>
- If processing your recipe using BitBake successfully unpacks
- the source files, you need to be sure that the directory
- pointed to by <filename>${S}</filename> matches the structure
- of the source.
- </para>
- </section>
-
- <section id='new-recipe-patching-code'>
- <title>Patching Code</title>
-
- <para>
- Sometimes it is necessary to patch code after it has been
- fetched.
- Any files mentioned in <filename>SRC_URI</filename> whose
- names end in <filename>.patch</filename> or
-            <filename>.diff</filename>, or compressed versions of these
-            suffixes (e.g. <filename>diff.gz</filename>), are treated as
- patches.
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
- task automatically applies these patches.
- </para>
-
- <para>
- The build system should be able to apply patches with the "-p1"
- option (i.e. one directory level in the path will be stripped
- off).
- If your patch needs to have more directory levels stripped off,
- specify the number of levels using the "striplevel" option in
- the <filename>SRC_URI</filename> entry for the patch.
- Alternatively, if your patch needs to be applied in a specific
- subdirectory that is not specified in the patch file, use the
- "patchdir" option in the entry.
- </para>
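-
-        <para>
-            For example, the following hypothetical entries show the form
-            these options take within <filename>SRC_URI</filename> (the
-            patch names and paths are placeholders):
-            <literallayout class='monospaced'>
-     SRC_URI += "file://fix-build.patch;striplevel=2"
-     SRC_URI += "file://subdir-fix.patch;patchdir=src/lib"
-            </literallayout>
-        </para>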
-
- <para>
- As with all local files referenced in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- using <filename>file://</filename>, you should place
- patch files in a directory next to the recipe either
- named the same as the base name of the recipe
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-BP'><filename>BP</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BPN'><filename>BPN</filename></ulink>)
- or "files".
- </para>
- </section>
-
- <section id='new-recipe-licensing'>
- <title>Licensing</title>
-
- <para>
- Your recipe needs to have both the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE'><filename>LICENSE</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LIC_FILES_CHKSUM'><filename>LIC_FILES_CHKSUM</filename></ulink>
- variables:
- <itemizedlist>
- <listitem><para><emphasis><filename>LICENSE</filename>:</emphasis>
- This variable specifies the license for the software.
- If you do not know the license under which the software
- you are building is distributed, you should go to the
- source code and look for that information.
- Typical files containing this information include
- <filename>COPYING</filename>,
- <filename>LICENSE</filename>, and
- <filename>README</filename> files.
- You could also find the information near the top of
- a source file.
- For example, given a piece of software licensed under
- the GNU General Public License version 2, you would
- set <filename>LICENSE</filename> as follows:
- <literallayout class='monospaced'>
- LICENSE = "GPLv2"
- </literallayout></para>
- <para>The licenses you specify within
- <filename>LICENSE</filename> can have any name as long
- as you do not use spaces, since spaces are used as
- separators between license names.
- For standard licenses, use the names of the files in
- <filename>meta/files/common-licenses/</filename>
- or the <filename>SPDXLICENSEMAP</filename> flag names
- defined in <filename>meta/conf/licenses.conf</filename>.
- </para></listitem>
- <listitem><para><emphasis><filename>LIC_FILES_CHKSUM</filename>:</emphasis>
- The OpenEmbedded build system uses this variable to
- make sure the license text has not changed.
- If it has, the build produces an error and it affords
- you the chance to figure it out and correct the problem.
- </para>
- <para>You need to specify all applicable licensing
- files for the software.
- At the end of the configuration step, the build process
- will compare the checksums of the files to be sure
- the text has not changed.
- Any differences result in an error with the message
- containing the current checksum.
- For more explanation and examples of how to set the
- <filename>LIC_FILES_CHKSUM</filename> variable, see the
-                    "<link linkend='usingpoky-configuring-LIC_FILES_CHKSUM'>Tracking License Changes</link>"
- section.</para>
-
- <para>To determine the correct checksum string, you
- can list the appropriate files in the
- <filename>LIC_FILES_CHKSUM</filename> variable with
- incorrect md5 strings, attempt to build the software,
- and then note the resulting error messages that will
- report the correct md5 strings.
- See the
- "<link linkend='new-recipe-fetching-code'>Fetching Code</link>"
- section for additional information.
- </para>
-
- <para>
- Here is an example that assumes the software has a
- <filename>COPYING</filename> file:
- <literallayout class='monospaced'>
- LIC_FILES_CHKSUM = "file://COPYING;md5=xxx"
- </literallayout>
- When you try to build the software, the build system
- will produce an error and give you the correct string
- that you can substitute into the recipe file for a
- subsequent build.
- </para></listitem>
- </itemizedlist>
- </para>
-
-<!--
-
- <para>
- For trying this out I created a new recipe named
- <filename>htop_1.0.2.bb</filename> and put it in
- <filename>poky/meta/recipes-extended/htop</filename>.
- There are two license type statements in my very simple
- recipe:
- <literallayout class='monospaced'>
- LICENSE = ""
-
- LIC_FILES_CHKSUM = ""
-
- SRC_URI[md5sum] = ""
- SRC_URI[sha256sum] = ""
- </literallayout>
- Evidently, you need to run a <filename>bitbake -c cleanall htop</filename>.
- Next, you delete or comment out the two <filename>SRC_URI</filename>
- lines at the end and then attempt to build the software with
- <filename>bitbake htop</filename>.
-        Doing so causes BitBake to report some errors and give
- you the actual strings you need for the last two
- <filename>SRC_URI</filename> lines.
- Prior to this, you have to dig around in the home page of the
- source for <filename>htop</filename> and determine that the
- software is released under GPLv2.
- You can provide that in the <filename>LICENSE</filename>
- statement.
- Now you edit your recipe to have those two strings for
- the <filename>SRC_URI</filename> statements:
- <literallayout class='monospaced'>
- LICENSE = "GPLv2"
-
- LIC_FILES_CHKSUM = ""
-
- SRC_URI = "${SOURCEFORGE_MIRROR}/htop/htop-${PV}.tar.gz"
- SRC_URI[md5sum] = "0d01cca8df3349c74569cefebbd9919e"
- SRC_URI[sha256sum] = "ee60657b044ece0df096c053060df7abf3cce3a568ab34d260049e6a37ccd8a1"
- </literallayout>
- At this point, you can build the software again using the
- <filename>bitbake htop</filename> command.
-        There is now just a set of errors associated with the
-        empty <filename>LIC_FILES_CHKSUM</filename> variable.
- </para>
--->
-
- </section>
-
- <section id='new-dependencies'>
- <title>Dependencies</title>
-
- <para>
- Most software packages have a short list of other packages
- that they require, which are called dependencies.
- These dependencies fall into two main categories: build-time
- dependencies, which are required when the software is built;
- and runtime dependencies, which are required to be installed
- on the target in order for the software to run.
- </para>
-
- <para>
- Within a recipe, you specify build-time dependencies using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- variable.
- Although nuances exist, items specified in
- <filename>DEPENDS</filename> should be names of other recipes.
- It is important that you specify all build-time dependencies
- explicitly.
- If you do not, due to the parallel nature of BitBake's
- execution, you can end up with a race condition where the
- dependency is present for one task of a recipe (e.g.
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>)
- and then gone when the next task runs (e.g.
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-compile'><filename>do_compile</filename></ulink>).
- </para>
-
- <para>
- Another consideration is that configure scripts might
- automatically check for optional dependencies and enable
- corresponding functionality if those dependencies are found.
- This behavior means that to ensure deterministic results and
- thus avoid more race conditions, you need to either explicitly
- specify these dependencies as well, or tell the configure
- script explicitly to disable the functionality.
- If you wish to make a recipe that is more generally useful
- (e.g. publish the recipe in a layer for others to use),
- instead of hard-disabling the functionality, you can use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGECONFIG'><filename>PACKAGECONFIG</filename></ulink>
- variable to allow functionality and the corresponding
- dependencies to be enabled and disabled easily by other
- users of the recipe.
- </para>
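-
-        <para>
-            As a sketch, a <filename>PACKAGECONFIG</filename> entry names a
-            feature and lists the configure arguments and dependencies that
-            go with it.
-            The following hypothetical lines enable an optional "gnutls"
-            feature by default and declare how to turn it on or off:
-            <literallayout class='monospaced'>
-     PACKAGECONFIG ??= "gnutls"
-     PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
-            </literallayout>
-        </para>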
-
- <para>
- Similar to build-time dependencies, you specify runtime
- dependencies through a variable -
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>,
- which is package-specific.
- All variables that are package-specific need to have the name
- of the package added to the end as an override.
- Since the main package for a recipe has the same name as the
- recipe, and the recipe's name can be found through the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>
- variable, then you specify the dependencies for the main
- package by setting <filename>RDEPENDS_${PN}</filename>.
- If the package were named <filename>${PN}-tools</filename>,
- then you would set <filename>RDEPENDS_${PN}-tools</filename>,
- and so forth.
- </para>
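-
-        <para>
-            For example, the following hypothetical lines set runtime
-            dependencies for the main package and for a
-            <filename>${PN}-tools</filename> package (the dependency names
-            are placeholders):
-            <literallayout class='monospaced'>
-     RDEPENDS_${PN} = "libexample"
-     RDEPENDS_${PN}-tools = "bash"
-            </literallayout>
-        </para>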
-
- <para>
- Some runtime dependencies will be set automatically at
- packaging time.
- These dependencies include any shared library dependencies
- (i.e. if a package "example" contains "libexample" and
- another package "mypackage" contains a binary that links to
- "libexample" then the OpenEmbedded build system will
- automatically add a runtime dependency to "mypackage" on
- "example").
- See the
- "<ulink url='&YOCTO_DOCS_OM_URL;#automatically-added-runtime-dependencies'>Automatically Added Runtime Dependencies</ulink>"
- section in the Yocto Project Overview and Concepts Manual for
- further details.
- </para>
- </section>
-
- <section id='new-recipe-configuring-the-recipe'>
- <title>Configuring the Recipe</title>
-
- <para>
- Most software provides some means of setting build-time
- configuration options before compilation.
- Typically, setting these options is accomplished by running a
- configure script with options, or by modifying a build
- configuration file.
- <note>
- As of Yocto Project Release 1.7, some of the core recipes
- that package binary configuration scripts now disable the
- scripts because the scripts previously required error-prone
- path substitution.
- The OpenEmbedded build system uses
- <filename>pkg-config</filename> now, which is much more
- robust.
- You can find a list of the <filename>*-config</filename>
- scripts that are disabled in the
- "<ulink url='&YOCTO_DOCS_REF_URL;#migration-1.7-binary-configuration-scripts-disabled'>Binary Configuration Scripts Disabled</ulink>"
- section in the Yocto Project Reference Manual.
- </note>
- </para>
-
- <para>
- A major part of build-time configuration is about checking for
- build-time dependencies and possibly enabling optional
- functionality as a result.
- You need to specify any build-time dependencies for the
- software you are building in your recipe's
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- value, in terms of other recipes that satisfy those
- dependencies.
- You can often find build-time or runtime
- dependencies described in the software's documentation.
- </para>
-
- <para>
- The following list provides configuration items of note based
- on how your software is built:
- <itemizedlist>
- <listitem><para><emphasis>Autotools:</emphasis>
- If your source files have a
- <filename>configure.ac</filename> file, then your
- software is built using Autotools.
- If this is the case, you just need to worry about
- modifying the configuration.</para>
-
- <para>When using Autotools, your recipe needs to inherit
- the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-autotools'><filename>autotools</filename></ulink>
- class and your recipe does not have to contain a
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>
- task.
- However, you might still want to make some adjustments.
- For example, you can set
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_OECONF'><filename>EXTRA_OECONF</filename></ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGECONFIG_CONFARGS'><filename>PACKAGECONFIG_CONFARGS</filename></ulink>
- to pass any needed configure options that are specific
- to the recipe.
- </para></listitem>
- <listitem><para><emphasis>CMake:</emphasis>
- If your source files have a
- <filename>CMakeLists.txt</filename> file, then your
- software is built using CMake.
- If this is the case, you just need to worry about
- modifying the configuration.</para>
-
- <para>When you use CMake, your recipe needs to inherit
- the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-cmake'><filename>cmake</filename></ulink>
- class and your recipe does not have to contain a
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>
- task.
- You can make some adjustments by setting
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_OECMAKE'><filename>EXTRA_OECMAKE</filename></ulink>
- to pass any needed configure options that are specific
- to the recipe.
- <note>
- If you need to install one or more custom CMake
- toolchain files that are supplied by the
- application you are building, install the files to
- <filename>${D}${datadir}/cmake/Modules</filename>
- during
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>.
- </note>
- </para></listitem>
- <listitem><para><emphasis>Other:</emphasis>
- If your source files do not have a
- <filename>configure.ac</filename> or
- <filename>CMakeLists.txt</filename> file, then your
- software is built using some method other than Autotools
- or CMake.
- If this is the case, you normally need to provide a
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>
- task in your recipe
- unless, of course, there is nothing to configure.
- </para>
- <para>Even if your software is not being built by
- Autotools or CMake, you still might not need to deal
- with any configuration issues.
- You need to determine if configuration is even a required step.
- You might need to modify a Makefile or some configuration file
- used for the build to specify necessary build options.
- Or, perhaps you might need to run a provided, custom
- configure script with the appropriate options.</para>
- <para>For the case involving a custom configure
- script, you would run
- <filename>./configure --help</filename> and look for
- the options you need to set.</para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Once configuration succeeds, it is always good practice to
- look at the <filename>log.do_configure</filename> file to
- ensure that the appropriate options have been enabled and no
- additional build-time dependencies need to be added to
- <filename>DEPENDS</filename>.
- For example, if the configure script reports that it found
- something not mentioned in <filename>DEPENDS</filename>, or
- that it did not find something that it needed for some
- desired optional functionality, then you would need to add
- those to <filename>DEPENDS</filename>.
- Looking at the log might also reveal items being checked for,
- enabled, or both that you do not want, or items not being found
- that are in <filename>DEPENDS</filename>, in which case
- you would need to look at passing extra options to the
- configure script as needed.
- For reference information on configure options specific to the
- software you are building, you can consult the output of the
- <filename>./configure --help</filename> command within
- <filename>${S}</filename> or consult the software's upstream
- documentation.
- </para>
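-
- <para>
- For illustration only, suppose the configure log shows optional
- "examples" and "docs" features being automatically enabled that you
- want turned off.
- A hedged sketch of the corresponding addition for an Autotools-based
- recipe (the option names are assumptions for the example and depend
- entirely on the software being built) might look like:
- <literallayout class='monospaced'>
- EXTRA_OECONF += "--disable-examples --disable-docs"
- </literallayout>
- </para>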
- </section>
-
- <section id='new-recipe-using-headers-to-interface-with-devices'>
- <title>Using Headers to Interface with Devices</title>
-
- <para>
- If your recipe builds an application that needs to
- communicate with some device or needs an API into a custom
- kernel, you will need to provide appropriate header files.
- Under no circumstances should you ever modify the existing
- <filename>meta/recipes-kernel/linux-libc-headers/linux-libc-headers.inc</filename>
- file.
- These headers are used to build <filename>libc</filename> and
- must not be compromised with custom or machine-specific
- header information.
- If you customize <filename>libc</filename> through modified
- headers all other applications that use
- <filename>libc</filename> thus become affected.
- <note><title>Warning</title>
- Never copy and customize the <filename>libc</filename>
- header file (i.e.
- <filename>meta/recipes-kernel/linux-libc-headers/linux-libc-headers.inc</filename>).
- </note>
- The correct way to interface to a device or custom kernel is
- to use a separate package that provides the additional headers
- for the driver or other unique interfaces.
- When doing so, your application also becomes responsible for
- creating a dependency on that specific provider.
- </para>
-
- <para>
- Consider the following:
- <itemizedlist>
- <listitem><para>
- Never modify
- <filename>linux-libc-headers.inc</filename>.
- Consider that file to be part of the
- <filename>libc</filename> system, and not something
- you use to access the kernel directly.
- You should access <filename>libc</filename> through
- specific <filename>libc</filename> calls.
- </para></listitem>
- <listitem><para>
- Applications that must talk directly to devices
- should either provide necessary headers themselves,
- or establish a dependency on a special headers package
- that is specific to that driver.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- For example, suppose you want to modify an existing header
- that adds I/O control or network support.
- If the modifications are used by a small number of programs,
- providing a unique version of a header is easy and has little
- impact.
- When doing so, bear in mind the guidelines in the previous
- list.
- <note>
- If for some reason your changes need to modify the behavior
- of <filename>libc</filename>, and consequently all
- other applications on the system, use a
- <filename>.bbappend</filename> to modify the
- <filename>linux-libc-headers.inc</filename> file.
- However, take care to not make the changes
- machine specific.
- </note>
- </para>
-
- <para>
- Consider a case where your kernel is older and you need
- an older <filename>libc</filename> ABI.
- The headers installed by your recipe should still be a
- standard mainline kernel, not your own custom one.
- </para>
-
- <para>
- When you use custom kernel headers you need to get them from
- <ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_KERNEL_DIR'><filename>STAGING_KERNEL_DIR</filename></ulink>,
- which is the directory with kernel headers that are
- required to build out-of-tree modules.
- Your recipe will also need the following:
- <literallayout class='monospaced'>
- do_configure[depends] += "virtual/kernel:do_shared_workdir"
- </literallayout>
- </para>
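-
- <para>
- For illustration only, a hedged sketch of the relevant recipe
- fragment might look like the following (the exact include path is an
- assumption for the example and depends on where the headers you need
- live within the kernel tree):
- <literallayout class='monospaced'>
- do_configure[depends] += "virtual/kernel:do_shared_workdir"
- CFLAGS_append = " -I${STAGING_KERNEL_DIR}/include"
- </literallayout>
- </para>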
- </section>
-
- <section id='new-recipe-compilation'>
- <title>Compilation</title>
-
- <para>
- During a build, the <filename>do_compile</filename> task
- happens after source is fetched, unpacked, and configured.
- If the recipe passes through <filename>do_compile</filename>
- successfully, nothing needs to be done.
- </para>
-
- <para>
- However, if the compile step fails, you need to diagnose the
- failure.
- Here are some common issues that cause failures.
- <note>
- For cases where improper paths are detected for
- configuration files or for when libraries/headers cannot
- be found, be sure you are using the more robust
- <filename>pkg-config</filename>.
- See the note in section
- "<link linkend='new-recipe-configuring-the-recipe'>Configuring the Recipe</link>"
- for additional information.
- </note>
- <itemizedlist>
- <listitem><para><emphasis>Parallel build failures:</emphasis>
- These failures manifest themselves as intermittent
- errors, or errors reporting that a file or directory
- that should be created by some other part of the build
- process could not be found.
- This type of failure can occur even if, upon inspection,
- the file or directory does exist after the build has
- failed, because that part of the build process happened
- in the wrong order.</para>
- <para>To fix the problem, you need to either satisfy
- the missing dependency in the Makefile or whatever
- script produced the Makefile, or (as a workaround)
- set
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></ulink>
- to an empty string:
- <literallayout class='monospaced'>
- PARALLEL_MAKE = ""
- </literallayout></para>
- <para>
- For information on parallel Makefile issues, see the
- "<link linkend='debugging-parallel-make-races'>Debugging Parallel Make Races</link>"
- section.
- </para></listitem>
- <listitem><para><emphasis>Improper host path usage:</emphasis>
- This failure applies to recipes building for the target
- or <filename>nativesdk</filename> only.
- The failure occurs when the compilation process uses
- improper headers, libraries, or other files from the
- host system when cross-compiling for the target.
- </para>
- <para>To fix the problem, examine the
- <filename>log.do_compile</filename> file to identify
- the host paths being used (e.g.
- <filename>/usr/include</filename>,
- <filename>/usr/lib</filename>, and so forth) and then
- either add configure options, apply a patch, or do both.
- </para></listitem>
- <listitem><para><emphasis>Failure to find required
- libraries/headers:</emphasis>
- If a build-time dependency is missing because it has
- not been declared in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>,
- or because the dependency exists but the path used by
- the build process to find the file is incorrect and the
- configure step did not detect it, the compilation
- process could fail.
- For either of these failures, the compilation process
- notes that files could not be found.
- In these cases, you need to go back and add additional
- options to the configure script as well as possibly
- add additional build-time dependencies to
- <filename>DEPENDS</filename>.</para>
- <para>Occasionally, it is necessary to apply a patch
- to the source to ensure the correct paths are used.
- If you need to specify paths to find files staged
- into the sysroot from other recipes, use the variables
- that the OpenEmbedded build system provides
- (e.g.
- <filename>STAGING_BINDIR</filename>,
- <filename>STAGING_INCDIR</filename>,
- <filename>STAGING_DATADIR</filename>, and so forth).
-<!--
- (e.g.
- <ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_BINDIR'><filename>STAGING_BINDIR</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_INCDIR'><filename>STAGING_INCDIR</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_DATADIR'><filename>STAGING_DATADIR</filename></ulink>,
- and so forth).
--->
- </para></listitem>
- </itemizedlist>
- </para>
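-
- <para>
- As an illustration of pointing a configure script at staged files, a
- minimal hedged sketch (assuming a hypothetical
- <filename>libfoo</filename> dependency and a hypothetical
- <filename>--with-libfoo-includes</filename> option provided by the
- software being built) might look like:
- <literallayout class='monospaced'>
- DEPENDS += "libfoo"
- EXTRA_OECONF += "--with-libfoo-includes=${STAGING_INCDIR}"
- </literallayout>
- </para>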
- </section>
-
- <section id='new-recipe-installing'>
- <title>Installing</title>
-
- <para>
- During <filename>do_install</filename>, the task copies the
- built files along with their hierarchy to locations that
- would mirror their locations on the target device.
- The installation process copies files from the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink><filename>}</filename>,
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-B'><filename>B</filename></ulink><filename>}</filename>,
- and
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}</filename>
- directories to the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-D'><filename>D</filename></ulink><filename>}</filename>
- directory to create the structure as it should appear on the
- target system.
- </para>
-
- <para>
- How your software is built affects what you must do to be
- sure your software is installed correctly.
- The following list describes what you must do for installation
- depending on the type of build system used by the software
- being built:
- <itemizedlist>
- <listitem><para><emphasis>Autotools and CMake:</emphasis>
- If the software your recipe is building uses Autotools
- or CMake, the OpenEmbedded build
- system understands how to install the software.
- Consequently, you do not have to have a
- <filename>do_install</filename> task as part of your
- recipe.
- You just need to make sure the install portion of the
- build completes with no issues.
- However, if you wish to install additional files not
- already being installed by
- <filename>make install</filename>, you should do this
- using a <filename>do_install_append</filename> function
- using the install command as described in
- the "Manual" bulleted item later in this list.
- </para></listitem>
- <listitem><para><emphasis>Other (using
- <filename>make install</filename>):</emphasis>
- You need to define a
- <filename>do_install</filename> function in your
- recipe.
- The function should call
- <filename>oe_runmake install</filename> and will likely
- need to pass in the destination directory as well.
- How you pass that path is dependent on how the
- <filename>Makefile</filename> being run is written
- (e.g. <filename>DESTDIR=${D}</filename>,
- <filename>PREFIX=${D}</filename>,
- <filename>INSTALLROOT=${D}</filename>, and so forth).
- </para>
- <para>For an example recipe using
- <filename>make install</filename>, see the
- "<link linkend='new-recipe-makefile-based-package'>Makefile-Based Package</link>"
- section.</para></listitem>
- <listitem><para><emphasis>Manual:</emphasis>
- You need to define a
- <filename>do_install</filename> function in your
- recipe.
- The function must first use
- <filename>install -d</filename> to create the
- directories under
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-D'><filename>D</filename></ulink><filename>}</filename>.
- Once the directories exist, your function can use
- <filename>install</filename> to manually install the
- built software into the directories.</para>
- <para>You can find more information on
- <filename>install</filename> at
- <ulink url='http://www.gnu.org/software/coreutils/manual/html_node/install-invocation.html'></ulink>.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- For the scenarios that do not use Autotools or
- CMake, you need to track the installation
- and diagnose and fix any issues until everything installs
- correctly.
- You need to look in the default location of
- <filename>${D}</filename>, which is
- <filename>${WORKDIR}/image</filename>, to be sure your
- files have been installed correctly.
- </para>
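-
- <para>
- As an illustration of the "Manual" approach described above, a
- minimal hedged sketch (assuming a hypothetical
- <filename>myapp</filename> binary and configuration file produced by
- the build) might look like:
- <literallayout class='monospaced'>
- do_install() {
-     # Create the target directories under ${D}, then copy the files in
-     install -d ${D}${bindir}
-     install -m 0755 myapp ${D}${bindir}
-     install -d ${D}${sysconfdir}
-     install -m 0644 myapp.conf ${D}${sysconfdir}
- }
- </literallayout>
- </para>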
-
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- During the installation process, you might need to
- modify some of the installed files to suit the target
- layout.
- For example, you might need to replace hard-coded paths
- in an initscript with values of variables provided by
- the build system, such as replacing
- <filename>/usr/bin/</filename> with
- <filename>${bindir}</filename>.
- If you do perform such modifications during
- <filename>do_install</filename>, be sure to modify the
- destination file after copying rather than before
- copying.
- Modifying after copying ensures that the build system
- can re-execute <filename>do_install</filename> if
- needed.
- </para></listitem>
- <listitem><para>
- <filename>oe_runmake install</filename>, which can be
- run directly or can be run indirectly by the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-autotools'><filename>autotools</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-cmake'><filename>cmake</filename></ulink>
- classes, runs <filename>make install</filename> in
- parallel.
- Sometimes, a Makefile can have missing dependencies
- between targets that can result in race conditions.
- If you experience intermittent failures during
- <filename>do_install</filename>, you might be able to
- work around them by disabling parallel Makefile
- installs by adding the following to the recipe:
- <literallayout class='monospaced'>
- PARALLEL_MAKEINST = ""
- </literallayout>
- See
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKEINST'><filename>PARALLEL_MAKEINST</filename></ulink>
- for additional information.
- </para></listitem>
- <listitem><para>
- If you need to install one or more custom CMake
- toolchain files that are supplied by the
- application you are building, install the files to
- <filename>${D}${datadir}/cmake/Modules</filename>
- during
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>.
- </para></listitem>
- </itemizedlist>
- </note>
- </section>
-
- <section id='new-recipe-enabling-system-services'>
- <title>Enabling System Services</title>
-
- <para>
- If you want to install a service, which is a process that
- usually starts on boot and runs in the background, then
- you must include some additional definitions in your recipe.
- </para>
-
- <para>
- If you are adding services and the service initialization
- script or the service file itself is not installed, you must
- provide for that installation in your recipe using a
- <filename>do_install_append</filename> function.
- If your recipe already has a <filename>do_install</filename>
- function, update the function near its end rather than
- adding an additional <filename>do_install_append</filename>
- function.
- </para>
-
- <para>
- When you create the installation for your services, you need
- to accomplish what is normally done by
- <filename>make install</filename>.
- In other words, make sure your installation arranges the output
- similar to how it is arranged on the target system.
- </para>
-
- <para>
- The OpenEmbedded build system provides support for starting
- services two different ways:
- <itemizedlist>
- <listitem><para><emphasis>SysVinit:</emphasis>
- SysVinit is the traditional init system used to
- control the very basic
- functions of your system.
- The init program is the first program
- started by the Linux kernel when the system boots.
- Init then controls the startup, running, and shutdown
- of all other programs.</para>
- <para>To enable a service using SysVinit, your recipe
- needs to inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-update-rc.d'><filename>update-rc.d</filename></ulink>
- class.
- The class facilitates safely installing the
- package on the target.</para>
- <para>You will need to set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITSCRIPT_PACKAGES'><filename>INITSCRIPT_PACKAGES</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITSCRIPT_NAME'><filename>INITSCRIPT_NAME</filename></ulink>,
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITSCRIPT_PARAMS'><filename>INITSCRIPT_PARAMS</filename></ulink>
- variables within your recipe.</para></listitem>
- <listitem><para><emphasis>systemd:</emphasis>
- System Management Daemon (systemd) was designed to
- replace SysVinit and to provide
- enhanced management of services.
- For more information on systemd, see the systemd
- homepage at
- <ulink url='http://freedesktop.org/wiki/Software/systemd/'></ulink>.
- </para>
- <para>To enable a service using systemd, your recipe
- needs to inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-systemd'><filename>systemd</filename></ulink>
- class.
- See the <filename>systemd.bbclass</filename> file
- located in your
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- for more information.
- </para></listitem>
- </itemizedlist>
- </para>
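-
- <para>
- To illustrate both approaches above, here is a minimal hedged sketch
- (the <filename>myservice</filename> initscript and unit file names
- are assumptions for the example):
- <literallayout class='monospaced'>
- # SysVinit
- inherit update-rc.d
- INITSCRIPT_PACKAGES = "${PN}"
- INITSCRIPT_NAME = "myservice"
- INITSCRIPT_PARAMS = "defaults 90"
-
- # systemd
- inherit systemd
- SYSTEMD_SERVICE_${PN} = "myservice.service"
- </literallayout>
- The corresponding initscript or <filename>.service</filename> file
- must still be installed by the recipe, for example from a
- <filename>do_install_append</filename> function as described earlier
- in this section.
- </para>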
- </section>
-
- <section id='new-recipe-packaging'>
- <title>Packaging</title>
-
- <para>
- Successful packaging is a combination of automated processes
- performed by the OpenEmbedded build system and some
- specific steps you need to take.
- The following list describes the process:
- <itemizedlist>
- <listitem><para><emphasis>Splitting Files</emphasis>:
- The <filename>do_package</filename> task splits the
- files produced by the recipe into logical components.
- Even software that produces a single binary might
- still have debug symbols, documentation, and other
- logical components that should be split out.
- The <filename>do_package</filename> task ensures
- that files are split up and packaged correctly.
- </para></listitem>
- <listitem><para><emphasis>Running QA Checks</emphasis>:
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-insane'><filename>insane</filename></ulink>
- class adds a step to
- the package generation process so that output quality
- assurance checks are generated by the OpenEmbedded
- build system.
- This step performs a range of checks to be sure the
- build's output is free of common problems that show
- up during runtime.
- For information on these checks, see the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-insane'><filename>insane</filename></ulink>
- class and the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-qa-checks'>QA Error and Warning Messages</ulink>"
- chapter in the Yocto Project Reference Manual.
- </para></listitem>
- <listitem><para><emphasis>Hand-Checking Your Packages</emphasis>:
- After you build your software, you need to be sure
- your packages are correct.
- Examine the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}/packages-split</filename>
- directory and make sure files are where you expect
- them to be.
- If you discover problems, you can set
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'><filename>PACKAGES</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES</filename></ulink>,
- <filename>do_install(_append)</filename>, and so forth as
- needed.
- </para></listitem>
- <listitem><para><emphasis>Splitting an Application into Multiple Packages</emphasis>:
- If you need to split an application into several
- packages, see the
- "<link linkend='splitting-an-application-into-multiple-packages'>Splitting an Application into Multiple Packages</link>"
- section for an example.
- </para></listitem>
- <listitem><para><emphasis>Installing a Post-Installation Script</emphasis>:
- For an example showing how to install a
- post-installation script, see the
- "<link linkend='new-recipe-post-installation-scripts'>Post-Installation Scripts</link>"
- section.
- </para></listitem>
- <listitem><para><emphasis>Marking Package Architecture</emphasis>:
- Depending on what your recipe is building and how it
- is configured, it might be important to mark the
- packages produced as being specific to a particular
- machine, or to mark them as not being specific to
- a particular machine or architecture at all.</para>
- <para>By default, packages apply to any machine with the
- same architecture as the target machine.
- When a recipe produces packages that are
- machine-specific (e.g. the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- value is passed into the configure script or a patch
- is applied only for a particular machine), you should
- mark them as such by adding the following to the
- recipe:
- <literallayout class='monospaced'>
- PACKAGE_ARCH = "${MACHINE_ARCH}"
- </literallayout></para>
- <para>On the other hand, if the recipe produces packages
- that do not contain anything specific to the target
- machine or architecture at all (e.g. recipes
- that simply package script files or configuration
- files), you should use the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-allarch'><filename>allarch</filename></ulink>
- class to do this for you by adding this to your
- recipe:
- <literallayout class='monospaced'>
- inherit allarch
- </literallayout>
- Ensuring that the package architecture is correct is
- not critical while you are doing the first few builds
- of your recipe.
- However, it is important in order
- to ensure that your recipe rebuilds (or does not
- rebuild) appropriately in response to changes in
- configuration, and to ensure that you get the
- appropriate packages installed on the target machine,
- particularly if you run separate builds for more
- than one target machine.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='new-sharing-files-between-recipes'>
- <title>Sharing Files Between Recipes</title>
-
- <para>
- Recipes often need to use files provided by other recipes on
- the build host.
- For example, an application linking to a common library needs
- access to the library itself and its associated headers.
- The way this access is accomplished is by populating a sysroot
- with files.
- Each recipe has two sysroots in its work directory, one for
- target files
- (<filename>recipe-sysroot</filename>) and one for files that
- are native to the build host
- (<filename>recipe-sysroot-native</filename>).
- <note>
- You might find the term "staging" used within the Yocto
- Project in reference to files that populate sysroots (e.g. the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_DIR'><filename>STAGING_DIR</filename></ulink>
- variable).
- </note>
- </para>
-
- <para>
- Recipes should never populate the sysroot directly (i.e. write
- files into the sysroot).
- Instead, files should be installed into standard locations
- during the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>
- task within the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-D'><filename>D</filename></ulink><filename>}</filename>
- directory.
- The reason for this limitation is that almost all files that
- populate the sysroot are cataloged in manifests in order to
- ensure the files can be removed later when a recipe is either
- modified or removed.
- Thus, the sysroot is able to remain free from stale files.
- </para>
-
- <para>
- A subset of the files installed by the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>
- task are used by the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></ulink>
- task as defined by the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SYSROOT_DIRS'><filename>SYSROOT_DIRS</filename></ulink>
- variable to automatically populate the sysroot.
- It is possible to modify the list of directories that populate
- the sysroot.
- The following example shows how you could add the
- <filename>/opt</filename> directory to the list of
- directories within a recipe:
- <literallayout class='monospaced'>
- SYSROOT_DIRS += "/opt"
- </literallayout>
- </para>
-
- <para>
- For a more complete description of the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></ulink>
- task and its associated functions, see the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-staging'><filename>staging</filename></ulink>
- class.
- </para>
- </section>
-
- <section id='metadata-virtual-providers'>
- <title>Using Virtual Providers</title>
-
- <para>
- Prior to a build, if you know that several different recipes
- provide the same functionality, you can use a virtual provider
- (i.e. <filename>virtual/*</filename>) as a placeholder for the
- actual provider.
- The actual provider is determined at build-time.
- </para>
-
- <para>
- A common scenario where a virtual provider is used would be
- for the kernel recipe.
- Suppose you have three kernel recipes whose
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink>
- values map to <filename>kernel-big</filename>,
- <filename>kernel-mid</filename>, and
- <filename>kernel-small</filename>.
- Furthermore, each of these recipes in some way uses a
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PROVIDES'><filename>PROVIDES</filename></ulink>
- statement that essentially identifies itself as being able
- to provide <filename>virtual/kernel</filename>.
- Here is one way through the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-kernel'><filename>kernel</filename></ulink>
- class:
- <literallayout class='monospaced'>
- PROVIDES += "${@ "virtual/kernel" if (d.getVar("KERNEL_PACKAGE_NAME") == "kernel") else "" }"
- </literallayout>
- Any recipe that inherits the <filename>kernel</filename> class
- is going to utilize a <filename>PROVIDES</filename> statement
- that identifies that recipe as being able to provide the
- <filename>virtual/kernel</filename> item.
- </para>
-
- <para>
- Now comes the time to actually build an image and you need a
- kernel recipe, but which one?
- You can configure your build to call out the kernel recipe
- you want by using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PREFERRED_PROVIDER'><filename>PREFERRED_PROVIDER</filename></ulink>
- variable.
- As an example, consider the
- <ulink url='https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/conf/machine/include/x86-base.inc'><filename>x86-base.inc</filename></ulink>
- include file, which is a machine
- (i.e. <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>)
- configuration file.
- This include file is the reason all x86-based machines use the
- <filename>linux-yocto</filename> kernel.
- Here are the relevant lines from the include file:
- <literallayout class='monospaced'>
- PREFERRED_PROVIDER_virtual/kernel ??= "linux-yocto"
- PREFERRED_VERSION_linux-yocto ??= "4.15%"
- </literallayout>
- </para>
-
- <para>
- When you use a virtual provider, you do not have to
- "hard code" a recipe name as a build dependency.
- You can use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- variable to state the build is dependent on
- <filename>virtual/kernel</filename> for example:
- <literallayout class='monospaced'>
- DEPENDS = "virtual/kernel"
- </literallayout>
- During the build, the OpenEmbedded build system picks
- the correct recipe needed for the
- <filename>virtual/kernel</filename> dependency based on the
- <filename>PREFERRED_PROVIDER</filename> variable.
- If you want to use the small kernel mentioned at the beginning
- of this section, configure your build as follows:
- <literallayout class='monospaced'>
- PREFERRED_PROVIDER_virtual/kernel ??= "kernel-small"
- </literallayout>
- <note>
- Any recipe that
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PROVIDES'><filename>PROVIDES</filename></ulink>
- a <filename>virtual/*</filename> item that is ultimately
- not selected through
- <filename>PREFERRED_PROVIDER</filename> does not get built.
- Preventing these recipes from building is usually the
- desired behavior since this mechanism's purpose is to
- select between mutually exclusive alternative providers.
- </note>
- </para>
-
- <para>
- The following lists specific examples of virtual providers:
- <itemizedlist>
- <listitem><para>
- <filename>virtual/kernel</filename>:
- Provides the name of the kernel recipe to use when
- building a kernel image.
- </para></listitem>
- <listitem><para>
- <filename>virtual/bootloader</filename>:
- Provides the name of the bootloader to use when
- building an image.
- </para></listitem>
- <listitem><para>
- <filename>virtual/mesa</filename>:
- Provides <filename>gbm.pc</filename>.
- </para></listitem>
- <listitem><para>
- <filename>virtual/egl</filename>:
- Provides <filename>egl.pc</filename> and possibly
- <filename>wayland-egl.pc</filename>.
- </para></listitem>
- <listitem><para>
- <filename>virtual/libgl</filename>:
- Provides <filename>gl.pc</filename> (i.e. libGL).
- </para></listitem>
- <listitem><para>
- <filename>virtual/libgles1</filename>:
- Provides <filename>glesv1_cm.pc</filename>
- (i.e. libGLESv1_CM).
- </para></listitem>
- <listitem><para>
- <filename>virtual/libgles2</filename>:
- Provides <filename>glesv2.pc</filename>
- (i.e. libGLESv2).
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='properly-versioning-pre-release-recipes'>
- <title>Properly Versioning Pre-Release Recipes</title>
-
- <para>
- Sometimes the name of a recipe can lead to versioning
- problems when the recipe is upgraded to a final release.
- For example, consider the
- <filename>irssi_0.8.16-rc1.bb</filename> recipe file in
- the list of example recipes in the
- "<link linkend='new-recipe-storing-and-naming-the-recipe'>Storing and Naming the Recipe</link>"
- section.
- This recipe is at a release candidate stage (i.e.
- "rc1").
- When the recipe is released, the recipe filename becomes
- <filename>irssi_0.8.16.bb</filename>.
- The version change from <filename>0.8.16-rc1</filename>
- to <filename>0.8.16</filename> is seen as a decrease by the
- build system and package managers, so the resulting packages
- will not correctly trigger an upgrade.
- </para>
-
- <para>
- In order to ensure the versions compare properly, the
- recommended convention is to set
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- within the recipe to
- "<replaceable>previous_version</replaceable>+<replaceable>current_version</replaceable>".
- You can use an additional variable so that you can use the
- current version elsewhere.
- Here is an example:
- <literallayout class='monospaced'>
- REALPV = "0.8.16-rc1"
- PV = "0.8.15+${REALPV}"
- </literallayout>
- </para>
- </section>
-
- <section id='new-recipe-post-installation-scripts'>
- <title>Post-Installation Scripts</title>
-
- <para>
- Post-installation scripts run immediately after installing
- a package on the target or during image creation when a
- package is included in an image.
- To add a post-installation script to a package, add a
- <filename>pkg_postinst_</filename><replaceable>PACKAGENAME</replaceable><filename>()</filename> function to
- the recipe file (<filename>.bb</filename>) and replace
- <replaceable>PACKAGENAME</replaceable> with the name of the package
- you want to attach to the <filename>postinst</filename>
- script.
- To apply the post-installation script to the main package
- for the recipe, which is usually what is required, specify
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>
- in place of <replaceable>PACKAGENAME</replaceable>.
- </para>
-
- <para>
- A post-installation function has the following structure:
- <literallayout class='monospaced'>
- pkg_postinst_<replaceable>PACKAGENAME</replaceable>() {
- # Commands to carry out
- }
- </literallayout>
- </para>
-
- <para>
- The script defined in the post-installation function is
- called when the root filesystem is created.
- If the script succeeds, the package is marked as installed.
- If the script fails, the package is marked as unpacked and
- the script is executed when the image boots again.
- <note>
- Any RPM post-installation script that runs on the target
- should return a 0 exit code.
- RPM does not allow non-zero exit codes for these scripts;
- if a script returns a non-zero exit code, the RPM package
- manager fails the package installation on the target.
- </note>
- </para>
-
- <para>
- Sometimes it is necessary for the execution of a
- post-installation script to be delayed until the first boot.
- For example, the script might need to be executed on the
- device itself.
- To delay script execution until boot time, you must explicitly
- mark post installs to defer to the target.
- You can use <filename>pkg_postinst_ontarget()</filename> or
- call
- <filename>postinst_intercept delay_to_first_boot</filename>
- from <filename>pkg_postinst()</filename>.
- Any failure of a <filename>pkg_postinst()</filename> script
- (including exit 1) triggers an error during the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-rootfs'><filename>do_rootfs</filename></ulink>
- task.
- </para>
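-
- <para>
- For illustration only, a hedged sketch of a post-installation script
- that must run on the device itself (the <filename>myapp</filename>
- command is an assumption for the example) might look like:
- <literallayout class='monospaced'>
- pkg_postinst_ontarget_${PN}() {
-     # Runs once, on the target, at first boot
-     myapp --generate-cache
- }
- </literallayout>
- </para>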
-
- <para>
- If you have recipes that use the
- <filename>pkg_postinst</filename> function
- and require the use of non-standard native
- tools that have dependencies during rootfs construction, you
- need to use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_WRITE_DEPS'><filename>PACKAGE_WRITE_DEPS</filename></ulink>
- variable in your recipe to list these tools.
- If you do not use this variable, the tools might be missing and
- execution of the post-installation script is deferred until
- first boot.
- Deferring the script to first boot is undesirable and,
- for a read-only root filesystem, impossible.
- </para>
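-
- <para>
- For example, if a post-installation script relied on a hypothetical
- <filename>mytool-native</filename> helper during rootfs
- construction, the recipe would need:
- <literallayout class='monospaced'>
- PACKAGE_WRITE_DEPS += "mytool-native"
- </literallayout>
- </para>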
-
- <note>
- Equivalent support for pre-install, pre-uninstall, and
- post-uninstall scripts exists by way of
- <filename>pkg_preinst</filename>,
- <filename>pkg_prerm</filename>, and
- <filename>pkg_postrm</filename>, respectively.
- These scripts work in exactly the same way as
- <filename>pkg_postinst</filename> except
- that they run at different times.
- Also, because of when they run, they cannot be
- run at image creation time the way
- <filename>pkg_postinst</filename> can.
- </note>
- </section>
-
- <section id='new-recipe-testing'>
- <title>Testing</title>
-
- <para>
- The final step for completing your recipe is to be sure that
- the software you built runs correctly.
- To accomplish runtime testing, add the build's output
- packages to your image and test them on the target.
- </para>
-
- <para>
- For information on how to customize your image by adding
- specific packages, see the
- "<link linkend='usingpoky-extend-customimage'>Customizing Images</link>"
- section.
- </para>
- </section>
-
- <section id='new-recipe-testing-examples'>
- <title>Examples</title>
-
- <para>
- To help summarize how to write a recipe, this section provides
- some examples given various scenarios:
- <itemizedlist>
- <listitem><para>Recipes that use local files</para></listitem>
- <listitem><para>Using an Autotooled package</para></listitem>
- <listitem><para>Using a Makefile-based package</para></listitem>
- <listitem><para>Splitting an application into multiple packages</para></listitem>
- <listitem><para>Adding binaries to an image</para></listitem>
- </itemizedlist>
- </para>
-
- <section id='new-recipe-single-c-file-package-hello-world'>
- <title>Single .c File Package (Hello World!)</title>
-
- <para>
- Building an application from a single file that is stored
- locally (e.g. under <filename>files</filename>) requires
- a recipe that has the file listed in the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename>
- variable.
- Additionally, you need to manually write the
- <filename>do_compile</filename> and
- <filename>do_install</filename> tasks.
- The <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'>S</ulink></filename>
- variable defines the directory containing the source code,
- which is set to
- <ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>
- in this case - the directory BitBake uses for the build.
- <literallayout class='monospaced'>
- SUMMARY = "Simple helloworld application"
- SECTION = "examples"
- LICENSE = "MIT"
- LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
-
- SRC_URI = "file://helloworld.c"
-
- S = "${WORKDIR}"
-
- do_compile() {
- ${CC} helloworld.c -o helloworld
- }
-
- do_install() {
- install -d ${D}${bindir}
- install -m 0755 helloworld ${D}${bindir}
- }
- </literallayout>
- </para>
-
- <para>
- By default, the <filename>helloworld</filename>,
- <filename>helloworld-dbg</filename>, and
- <filename>helloworld-dev</filename> packages are built.
- For information on how to customize the packaging process,
- see the
- "<link linkend='splitting-an-application-into-multiple-packages'>Splitting an Application into Multiple Packages</link>"
- section.
- </para>
- </section>
-
- <section id='new-recipe-autotooled-package'>
- <title>Autotooled Package</title>
- <para>
- Applications that use Autotools such as <filename>autoconf</filename> and
- <filename>automake</filename> require a recipe that has a source archive listed in
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename> and
- also inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-autotools'><filename>autotools</filename></ulink>
- class, which contains the definitions of all the steps
- needed to build an Autotool-based application.
- The result of the build is automatically packaged.
- And, if the application uses NLS for localization, packages with locale information are
- generated (one package per language).
- Following is one example: (<filename>hello_2.3.bb</filename>)
- <literallayout class='monospaced'>
- SUMMARY = "GNU Helloworld application"
- SECTION = "examples"
- LICENSE = "GPLv2+"
- LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
- SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.gz"
-
- inherit autotools gettext
- </literallayout>
- </para>
-
- <para>
- The variable
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-LIC_FILES_CHKSUM'>LIC_FILES_CHKSUM</ulink></filename>
- is used to track source license changes as described in the
- "<link linkend='usingpoky-configuring-LIC_FILES_CHKSUM'>Tracking License Changes</link>"
- section in the Yocto Project Overview and Concepts Manual.
- You can quickly create Autotool-based recipes in a manner
- similar to the previous example.
- </para>
- </section>
-
- <section id='new-recipe-makefile-based-package'>
- <title>Makefile-Based Package</title>
-
- <para>
- Applications that use GNU <filename>make</filename> also require a recipe that has
- the source archive listed in
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename>.
- You do not need to add a <filename>do_compile</filename> step since by default BitBake
- starts the <filename>make</filename> command to compile the application.
- If you need additional <filename>make</filename> options, you should store them in the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_OEMAKE'><filename>EXTRA_OEMAKE</filename></ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGECONFIG_CONFARGS'><filename>PACKAGECONFIG_CONFARGS</filename></ulink>
- variables.
- BitBake passes these options into the GNU <filename>make</filename> invocation.
- Note that a <filename>do_install</filename> task is still required.
- Otherwise, BitBake runs an empty <filename>do_install</filename> task by default.
- </para>
-
- <para>
- Some applications might require extra parameters to be passed to the compiler.
- For example, the application might need an additional header path.
- You can accomplish this by adding to the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-CFLAGS'>CFLAGS</ulink></filename> variable.
- The following example shows this:
- <literallayout class='monospaced'>
- CFLAGS_prepend = "-I ${S}/include "
- </literallayout>
- </para>
-
- <para>
- In the following example, <filename>mtd-utils</filename> is a makefile-based package:
- <literallayout class='monospaced'>
- SUMMARY = "Tools for managing memory technology devices"
- SECTION = "base"
- DEPENDS = "zlib lzo e2fsprogs util-linux"
- HOMEPAGE = "http://www.linux-mtd.infradead.org/"
- LICENSE = "GPLv2+"
- LIC_FILES_CHKSUM = "file://COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3 \
- file://include/common.h;beginline=1;endline=17;md5=ba05b07912a44ea2bf81ce409380049c"
-
- # Use the latest version at 26 Oct, 2013
- SRCREV = "9f107132a6a073cce37434ca9cda6917dd8d866b"
- SRC_URI = "git://git.infradead.org/mtd-utils.git \
- file://add-exclusion-to-mkfs-jffs2-git-2.patch \
- "
-
- PV = "1.5.1+git${SRCPV}"
-
- S = "${WORKDIR}/git"
-
- EXTRA_OEMAKE = "'CC=${CC}' 'RANLIB=${RANLIB}' 'AR=${AR}' 'CFLAGS=${CFLAGS} -I${S}/include -DWITHOUT_XATTR' 'BUILDDIR=${S}'"
-
- do_install () {
- oe_runmake install DESTDIR=${D} SBINDIR=${sbindir} MANDIR=${mandir} INCLUDEDIR=${includedir}
- }
-
- PACKAGES =+ "mtd-utils-jffs2 mtd-utils-ubifs mtd-utils-misc"
-
- FILES_mtd-utils-jffs2 = "${sbindir}/mkfs.jffs2 ${sbindir}/jffs2dump ${sbindir}/jffs2reader ${sbindir}/sumtool"
- FILES_mtd-utils-ubifs = "${sbindir}/mkfs.ubifs ${sbindir}/ubi*"
- FILES_mtd-utils-misc = "${sbindir}/nftl* ${sbindir}/ftl* ${sbindir}/rfd* ${sbindir}/doc* ${sbindir}/serve_image ${sbindir}/recv_image"
-
- PARALLEL_MAKE = ""
-
- BBCLASSEXTEND = "native"
- </literallayout>
- </para>
- </section>
-
- <section id='splitting-an-application-into-multiple-packages'>
- <title>Splitting an Application into Multiple Packages</title>
-
- <para>
- You can use the variables
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'>PACKAGES</ulink></filename> and
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'>FILES</ulink></filename>
- to split an application into multiple packages.
- </para>
-
- <para>
- Following is an example that uses the <filename>libxpm</filename> recipe.
- By default, this recipe generates a single package that contains the library along
- with a few binaries.
- You can modify the recipe to split the binaries into separate packages:
- <literallayout class='monospaced'>
- require xorg-lib-common.inc
-
- SUMMARY = "Xpm: X Pixmap extension library"
- LICENSE = "BSD"
- LIC_FILES_CHKSUM = "file://COPYING;md5=51f4270b012ecd4ab1a164f5f4ed6cf7"
- DEPENDS += "libxext libsm libxt"
- PE = "1"
-
- XORG_PN = "libXpm"
-
- PACKAGES =+ "sxpm cxpm"
- FILES_cxpm = "${bindir}/cxpm"
- FILES_sxpm = "${bindir}/sxpm"
- </literallayout>
- </para>
-
- <para>
- In the previous example, we want to ship the <filename>sxpm</filename>
- and <filename>cxpm</filename> binaries in separate packages.
- Since <filename>bindir</filename> would be packaged into the main
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'>PN</ulink></filename>
- package by default, we prepend the <filename>PACKAGES</filename>
- variable so the additional package names are added to the start of the list.
- The extra <filename>FILES_*</filename>
- variables then contain the information that defines which files and
- directories go into which packages.
- Files included by earlier packages are skipped by later packages.
- Thus, the main <filename>PN</filename> package
- does not include the above-listed files.
- </para>
- </section>
-
- <section id='packaging-externally-produced-binaries'>
- <title>Packaging Externally Produced Binaries</title>
-
- <para>
- Sometimes, you need to add pre-compiled binaries to an
- image.
- For example, suppose that binaries for proprietary code
- exist, which are created by a particular division of a
- company.
- Your part of the company needs to use those binaries as
- part of an image that you are building using the
- OpenEmbedded build system.
- Since you only have the binaries and not the source code,
- you cannot use a typical recipe that expects to fetch the
- source specified in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- and then compile it.
- </para>
-
- <para>
- One method is to package the binaries and then install them
- as part of the image.
- Generally, it is not a good idea to package binaries
- since, among other things, it can hinder the ability to
- reproduce builds and could lead to compatibility problems
- with ABI in the future.
- However, sometimes you have no choice.
- </para>
-
- <para>
- The easiest solution is to create a recipe that uses
- the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-bin-package'><filename>bin_package</filename></ulink>
- class and to be sure that you are using default locations
- for build artifacts.
- In most cases, the <filename>bin_package</filename> class
- handles "skipping" the configure and compile steps as well
- as sets things up to grab packages from the appropriate
- area.
- In particular, this class sets <filename>noexec</filename>
- on both the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-compile'><filename>do_compile</filename></ulink>
- tasks, sets
- <filename>FILES_${PN}</filename> to "/" so that it picks
- up all files, and sets up a
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>
- task, which effectively copies all files from
- <filename>${S}</filename> to <filename>${D}</filename>.
- The <filename>bin_package</filename> class works well when
- the files extracted into <filename>${S}</filename> are
- already laid out in the way they should be laid out
- on the target.
- For more information on these variables, see the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>,
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-D'><filename>D</filename></ulink>
- variables in the Yocto Project Reference Manual's variable
- glossary.
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- Using
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- is a good idea even for components distributed
- in binary form, and is often necessary for
- shared libraries.
- For a shared library, listing the library
- dependencies in
- <filename>DEPENDS</filename> makes sure that
- the libraries are available in the staging
- sysroot when other recipes link against the
- library, which might be necessary for
- successful linking.
- </para></listitem>
- <listitem><para>
- Using <filename>DEPENDS</filename> also
- allows runtime dependencies between packages
- to be added automatically.
- See the
- "<ulink url='&YOCTO_DOCS_OM_URL;#automatically-added-runtime-dependencies'>Automatically Added Runtime Dependencies</ulink>"
- section in the Yocto Project Overview and
- Concepts Manual for more information.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
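-
- <para>
- For illustration only, a minimal hedged sketch of a
- <filename>bin_package</filename> recipe might look like the
- following (the archive name and license value are assumptions for
- the example):
- <literallayout class='monospaced'>
- SUMMARY = "Prebuilt proprietary binaries"
- LICENSE = "CLOSED"
-
- SRC_URI = "file://prebuilt-binaries.tar.gz"
-
- inherit bin_package
- </literallayout>
- As noted above, the archive is expected to unpack into
- <filename>${S}</filename> with the files already laid out as they
- should appear on the target.
- </para>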
-
- <para>
- If you cannot use the <filename>bin_package</filename>
- class, you need to be sure you are doing the following:
- <itemizedlist>
- <listitem><para>
- Create a recipe where the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-compile'><filename>do_compile</filename></ulink>
- tasks do nothing:
- It is usually sufficient to just not define these
- tasks in the recipe, because the default
- implementations do nothing unless a Makefile is
- found in
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink><filename>}</filename>.
- </para>
-
- <para>If
- <filename>${S}</filename> might contain a Makefile,
- or if you inherit some class that replaces
- <filename>do_configure</filename> and
- <filename>do_compile</filename> with custom
- versions, then you can use the
- <filename>[</filename><ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'><filename>noexec</filename></ulink><filename>]</filename>
- flag to turn the tasks into no-ops, as follows:
- <literallayout class='monospaced'>
- do_configure[noexec] = "1"
- do_compile[noexec] = "1"
- </literallayout>
- Unlike
- <ulink url='&YOCTO_DOCS_BB_URL;#deleting-a-task'><filename>deleting the tasks</filename></ulink>,
- using the flag preserves the dependency chain from
- the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-fetch'><filename>do_fetch</filename></ulink>, <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-unpack'><filename>do_unpack</filename></ulink>,
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
- tasks to the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>
- task.
- </para></listitem>
- <listitem><para>Make sure your
- <filename>do_install</filename> task installs the
- binaries appropriately.
- </para></listitem>
- <listitem><para>Ensure that you set up
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES</filename></ulink>
- (usually
- <filename>FILES_${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>)
- to point to the files you have installed, which of
- course depends on where you have installed them
- and whether those files are in different locations
- than the defaults.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id="following-recipe-style-guidelines">
- <title>Following Recipe Style Guidelines</title>
-
- <para>
- When writing recipes, it is good to conform to existing
- style guidelines.
- The
- <ulink url='http://www.openembedded.org/wiki/Styleguide'>OpenEmbedded Styleguide</ulink>
- wiki page provides rough guidelines for preferred recipe style.
- </para>
-
- <para>
- It is common for existing recipes to deviate a bit from this
- style.
- However, aiming for at least a consistent style is a good idea.
- Some practices, such as omitting spaces around
- <filename>=</filename> operators in assignments or ordering
- recipe components in an erratic way, are widely seen as poor
- style.
- </para>
- </section>
-
- <section id='recipe-syntax'>
- <title>Recipe Syntax</title>
-
- <para>
- Understanding recipe file syntax is important for writing
- recipes.
- The following list overviews the basic items that make up a
- BitBake recipe file.
- For more complete BitBake syntax descriptions, see the
- "<ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>"
- chapter of the BitBake User Manual.
- <itemizedlist>
- <listitem><para>
- <emphasis>Variable Assignments and Manipulations:</emphasis>
- Variable assignments allow a value to be assigned to a
- variable.
- The assignment can be static text or might include
- the contents of other variables.
- In addition to the assignment, appending and prepending
- operations are also supported.</para>
-
- <para>The following example shows some of the ways
- you can use variables in recipes:
- <literallayout class='monospaced'>
- S = "${WORKDIR}/postfix-${PV}"
- CFLAGS += "-DNO_ASM"
- SRC_URI_append = " file://fixup.patch"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Functions:</emphasis>
- Functions provide a series of actions to be performed.
- You usually use functions to override the default
- implementation of a task function or to complement
- a default function (i.e. append or prepend to an
- existing function).
- Standard functions use <filename>sh</filename> shell
- syntax, although access to OpenEmbedded variables and
- internal methods are also available.</para>
-
- <para>The following is an example function from the
- <filename>sed</filename> recipe:
- <literallayout class='monospaced'>
- do_install () {
- autotools_do_install
- install -d ${D}${base_bindir}
- mv ${D}${bindir}/sed ${D}${base_bindir}/sed
- rmdir ${D}${bindir}/
- }
- </literallayout>
- It is also possible to implement new functions that
- are called between existing tasks, as long as the
- new functions are not replacing or complementing the
- default functions.
- You can also implement functions in Python
- instead of shell.
- Neither of these options is seen in the majority of
- recipes.
- </para></listitem>
- <listitem><para><emphasis>Keywords:</emphasis>
- BitBake recipes use only a few keywords.
- You use keywords to include common
- functions (<filename>inherit</filename>), load parts
- of a recipe from other files
- (<filename>include</filename> and
- <filename>require</filename>) and export variables
- to the environment (<filename>export</filename>).
- </para>
-
- <para>The following example shows the use of some of
- these keywords:
- <literallayout class='monospaced'>
- export POSTCONF = "${STAGING_BINDIR}/postconf"
- inherit autotools
- require otherfile.inc
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Comments (#):</emphasis>
- Any lines that begin with the hash character
- (<filename>#</filename>) are treated as comment lines
- and are ignored:
- <literallayout class='monospaced'>
- # This is a comment
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- This next list summarizes the most important and most commonly
- used parts of the recipe syntax.
- For more information on these parts of the syntax, you can
- reference the
- <ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>
- chapter in the BitBake User Manual.
- <itemizedlist>
- <listitem><para>
- <emphasis>Line Continuation (\):</emphasis>
- Use the backslash (<filename>\</filename>)
- character to split a statement over multiple lines.
- Place the slash character at the end of the line that
- is to be continued on the next line:
- <literallayout class='monospaced'>
- VAR = "A really long \
- line"
- </literallayout>
- <note>
- You cannot have any characters, including spaces
- or tabs, after the slash character.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Using Variables (${<replaceable>VARNAME</replaceable>}):</emphasis>
- Use the <filename>${<replaceable>VARNAME</replaceable>}</filename>
- syntax to access the contents of a variable:
- <literallayout class='monospaced'>
- SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz"
- </literallayout>
- <note>
- It is important to understand that the value of a
- variable expressed in this form does not get
- substituted automatically.
- The expansion of these expressions happens
- on-demand later (e.g. usually when a function that
- makes reference to the variable executes).
- This behavior ensures that the values are most
- appropriate for the context in which they are
- finally used.
- On the rare occasion that you do need the variable
- expression to be expanded immediately, you can use
- the <filename>:=</filename> operator instead of
- <filename>=</filename> when you make the
- assignment, but this is not generally needed.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Quote All Assignments ("<replaceable>value</replaceable>"):</emphasis>
- Use double quotes around values in all variable
- assignments (e.g.
- <filename>"<replaceable>value</replaceable>"</filename>).
- Following is an example:
- <literallayout class='monospaced'>
- VAR1 = "${OTHERVAR}"
- VAR2 = "The version is ${PV}"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Conditional Assignment (?=):</emphasis>
- Conditional assignment assigns a value to a
- variable, but only when the variable is
- currently unset.
- Use the question mark followed by the equal sign
- (<filename>?=</filename>) to make this kind of
- "soft" assignment.
- Typically, "soft" assignments are used in the
- <filename>local.conf</filename> file for variables
- that are allowed to come through from the external
- environment.
- </para>
-
- <para>Here is an example where
- <filename>VAR1</filename> is set to "New value" if
- it is currently unset.
- However, if <filename>VAR1</filename> has already been
- set, it remains unchanged:
- <literallayout class='monospaced'>
- VAR1 ?= "New value"
- </literallayout>
- In this next example, <filename>VAR1</filename>
- is left with the value "Original value":
- <literallayout class='monospaced'>
- VAR1 = "Original value"
- VAR1 ?= "New value"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Appending (+=):</emphasis>
- Use the plus character followed by the equals sign
- (<filename>+=</filename>) to append values to existing
- variables.
- <note>
- This operator adds a space between the existing
- content of the variable and the new content.
- </note></para>
-
- <para>Here is an example:
- <literallayout class='monospaced'>
- SRC_URI += "file://fix-makefile.patch"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Prepending (=+):</emphasis>
- Use the equals sign followed by the plus character
- (<filename>=+</filename>) to prepend values to existing
- variables.
- <note>
- This operator adds a space between the new content
- and the existing content of the variable.
- </note></para>
-
- <para>Here is an example:
- <literallayout class='monospaced'>
- VAR =+ "Starts"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Appending (_append):</emphasis>
- Use the <filename>_append</filename> operator to
- append values to existing variables.
- This operator does not add any additional space.
- Also, the operator is applied after all the
- <filename>+=</filename> and
- <filename>=+</filename> operators have been applied and
- after all <filename>=</filename> assignments have
- occurred.
- </para>
-
- <para>The following example shows the space being
- explicitly added to the start to ensure the appended
- value is not merged with the existing value:
- <literallayout class='monospaced'>
- SRC_URI_append = " file://fix-makefile.patch"
- </literallayout>
- You can also use the <filename>_append</filename>
- operator with overrides, which results in the actions
- only being performed for the specified target or
- machine:
- <literallayout class='monospaced'>
- SRC_URI_append_sh4 = " file://fix-makefile.patch"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Prepending (_prepend):</emphasis>
- Use the <filename>_prepend</filename> operator to
- prepend values to existing variables.
- This operator does not add any additional space.
- Also, the operator is applied after all the
- <filename>+=</filename> and
- <filename>=+</filename> operators have been applied and
- after all <filename>=</filename> assignments have
- occurred.
- </para>
-
- <para>The following example shows the space being
- explicitly added to the end to ensure the prepended
- value is not merged with the existing value:
- <literallayout class='monospaced'>
- CFLAGS_prepend = "-I${S}/myincludes "
- </literallayout>
- You can also use the <filename>_prepend</filename>
- operator with overrides, which results in the actions
- only being performed for the specified target or
- machine:
- <literallayout class='monospaced'>
- CFLAGS_prepend_sh4 = "-I${S}/myincludes "
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Overrides:</emphasis>
- You can use overrides to set a value conditionally,
- typically based on how the recipe is being built.
- For example, to set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-KBRANCH'><filename>KBRANCH</filename></ulink>
- variable's value to "standard/base" for any target
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>,
- except for qemuarm where it should be set to
- "standard/arm-versatile-926ejs", you would do the
- following:
- <literallayout class='monospaced'>
- KBRANCH = "standard/base"
- KBRANCH_qemuarm = "standard/arm-versatile-926ejs"
- </literallayout>
- Overrides are also used to separate alternate values
- of a variable in other situations.
- For example, when setting variables such as
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>
- that are specific to individual packages produced by
- a recipe, you should always use an override that
- specifies the name of the package, as shown in the
- example following this list.
- </para></listitem>
- <listitem><para>
- <emphasis>Indentation:</emphasis>
- Use spaces for indentation rather than tabs.
- For shell functions, both currently work.
- However, it is a policy decision of the Yocto Project
- to use tabs in shell functions.
- Realize that some layers have a policy to use spaces
- for all indentation.
- </para></listitem>
- <listitem><para>
- <emphasis>Using Python for Complex Operations:</emphasis>
- For more advanced processing, it is possible to use
- Python code during variable assignments (e.g.
- search and replacement on a variable).</para>
-
- <para>You indicate Python code using the
- <filename>${@<replaceable>python_code</replaceable>}</filename>
- syntax for the variable assignment:
- <literallayout class='monospaced'>
- SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Shell Function Syntax:</emphasis>
- Write shell functions as if you were writing a shell
- script when you describe a list of actions to take.
- You should ensure that your script works with a generic
- <filename>sh</filename> and that it does not require
- any <filename>bash</filename> or other shell-specific
- functionality.
- The same considerations apply to various system
- utilities (e.g. <filename>sed</filename>,
- <filename>grep</filename>, <filename>awk</filename>,
- and so forth) that you might wish to use.
- If in doubt, you should check against multiple
- implementations, including those from BusyBox.
- </para></listitem>
- </itemizedlist>
- </para>
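-
- <para>
- As an example of the package-specific overrides mentioned
- above, the following sketch (using a hypothetical recipe
- that produces an extra <filename>${PN}-tools</filename>
- package) shows how <filename>FILES</filename> and
- <filename>RDEPENDS</filename> are set per package:
- <literallayout class='monospaced'>
-     PACKAGES =+ "${PN}-tools"
-     FILES_${PN}-tools = "${bindir}/example-tool"
-     RDEPENDS_${PN}-tools = "bash"
- </literallayout>
- Here, the override is the package name itself, so the
- values apply only to the <filename>${PN}-tools</filename>
- package.
- </para>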
- </section>
- </section>
-
- <section id="platdev-newmachine">
- <title>Adding a New Machine</title>
-
- <para>
- Adding a new machine to the Yocto Project is a straightforward
- process.
- This section describes how to add machines that are similar
- to those that the Yocto Project already supports.
- <note>
- Although well within the capabilities of the Yocto Project,
- adding a totally new architecture might require
- changes to <filename>gcc/glibc</filename> and to the site
- information, which is beyond the scope of this manual.
- </note>
- </para>
-
- <para>
- For a complete example that shows how to add a new machine,
- see the
- "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-bitbake-layers-script'>Creating a New BSP Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
- section in the Yocto Project Board Support Package (BSP)
- Developer's Guide.
- </para>
-
- <section id="platdev-newmachine-conffile">
- <title>Adding the Machine Configuration File</title>
-
- <para>
- To add a new machine, you need to add a new machine
- configuration file to the layer's
- <filename>conf/machine</filename> directory.
- This configuration file provides details about the device
- you are adding.
- </para>
-
- <para>
- The OpenEmbedded build system uses the root name of the
- machine configuration file to reference the new machine.
- For example, given a machine configuration file named
- <filename>crownbay.conf</filename>, the build system
- recognizes the machine as "crownbay".
- </para>
-
- <para>
- The most important variables you must set in your machine
- configuration file or include from a lower-level configuration
- file are as follows:
- <itemizedlist>
- <listitem><para><filename><ulink url='&YOCTO_DOCS_REF_URL;#var-TARGET_ARCH'>TARGET_ARCH</ulink></filename>
- (e.g. "arm")</para></listitem>
- <listitem><para><filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PREFERRED_PROVIDER'>PREFERRED_PROVIDER</ulink>_virtual/kernel</filename>
- </para></listitem>
- <listitem><para><filename><ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_FEATURES'>MACHINE_FEATURES</ulink></filename>
- (e.g. "apm screen wifi")</para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- You might also need these variables:
- <itemizedlist>
- <listitem><para><filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SERIAL_CONSOLES'>SERIAL_CONSOLES</ulink></filename>
- (e.g. "115200;ttyS0 115200;ttyS1")</para></listitem>
- <listitem><para><filename><ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_IMAGETYPE'>KERNEL_IMAGETYPE</ulink></filename>
- (e.g. "zImage")</para></listitem>
- <listitem><para><filename><ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FSTYPES'>IMAGE_FSTYPES</ulink></filename>
- (e.g. "tar.gz jffs2")</para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- You can find full details on these variables in the variables
- glossary of the Yocto Project Reference Manual.
- You can leverage existing machine <filename>.conf</filename>
- files from <filename>meta-yocto-bsp/conf/machine/</filename>.
- </para>
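-
- <para>
- The following sketch shows what a minimal machine
- configuration file might look like for a hypothetical
- "mymachine" ARM board.
- The exact values depend entirely on your hardware, so treat
- this only as an illustration of the variables discussed
- above:
- <literallayout class='monospaced'>
-     #@TYPE: Machine
-     #@NAME: mymachine
-     #@DESCRIPTION: Machine configuration for the hypothetical mymachine board
-
-     TARGET_ARCH = "arm"
-     PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
-     MACHINE_FEATURES = "apm screen wifi"
-
-     SERIAL_CONSOLES = "115200;ttyS0"
-     KERNEL_IMAGETYPE = "zImage"
-     IMAGE_FSTYPES = "tar.gz ext4"
- </literallayout>
- </para>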
- </section>
-
- <section id="platdev-newmachine-kernel">
- <title>Adding a Kernel for the Machine</title>
-
- <para>
- The OpenEmbedded build system needs to be able to build a kernel
- for the machine.
- You need to either create a new kernel recipe for this machine,
- or extend an existing kernel recipe.
- You can find several kernel recipe examples in the
- Source Directory at
- <filename>meta/recipes-kernel/linux</filename>
- that you can use as references.
- </para>
-
- <para>
- If you are creating a new kernel recipe, normal recipe-writing
- rules apply for setting up a
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename>.
- Thus, you need to specify any necessary patches and set
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'>S</ulink></filename>
- to point at the source code.
- You need to create a <filename>do_configure</filename> task that
- configures the unpacked kernel with a
- <filename>defconfig</filename> file.
- You can do this by using a <filename>make defconfig</filename>
- command or, more commonly, by copying in a suitable
- <filename>defconfig</filename> file and then running
- <filename>make oldconfig</filename>.
- By making use of <filename>inherit kernel</filename> and
- potentially some of the <filename>linux-*.inc</filename> files,
- most other functionality is centralized and the defaults of the
- class normally work well.
- </para>
-
- <para>
- If you are extending an existing kernel recipe, it is usually
- a matter of adding a suitable <filename>defconfig</filename>
- file.
- The file needs to be added into a location similar to
- <filename>defconfig</filename> files used for other machines
- in a given kernel recipe.
- A possible way to do this is by listing the file in the
- <filename>SRC_URI</filename> and adding the machine to the
- expression in
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-COMPATIBLE_MACHINE'>COMPATIBLE_MACHINE</ulink></filename>:
- <literallayout class='monospaced'>
- COMPATIBLE_MACHINE = '(qemux86|qemumips)'
- </literallayout>
- For more information on <filename>defconfig</filename> files,
- see the
- "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#changing-the-configuration'>Changing the Configuration</ulink>"
- section in the Yocto Project Linux Kernel Development Manual.
- </para>
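-
- <para>
- As a sketch of the approach described above, a
- <filename>linux-yocto</filename> append file in your layer
- (the machine name "mymachine" is hypothetical) might add a
- <filename>defconfig</filename> as follows:
- <literallayout class='monospaced'>
-     # linux-yocto_%.bbappend
-     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
-
-     COMPATIBLE_MACHINE_mymachine = "mymachine"
-     SRC_URI_append_mymachine = " file://defconfig"
- </literallayout>
- The <filename>defconfig</filename> file itself would live in
- a <filename>linux-yocto</filename> sub-directory next to the
- append file.
- </para>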
- </section>
-
- <section id="platdev-newmachine-formfactor">
- <title>Adding a Formfactor Configuration File</title>
-
- <para>
- A formfactor configuration file provides information about the
- target hardware for which the image is being built and information that
- the build system cannot obtain from other sources such as the kernel.
- Some examples of information contained in a formfactor configuration file include
- framebuffer orientation, whether or not the system has a keyboard,
- the positioning of the keyboard in relation to the screen, and
- the screen resolution.
- </para>
-
- <para>
- The build system uses reasonable defaults in most cases.
- However, if customization is
- necessary, you need to create a <filename>machconfig</filename> file
- in the <filename>meta/recipes-bsp/formfactor/files</filename>
- directory.
- This directory contains directories for specific machines such as
- <filename>qemuarm</filename> and <filename>qemux86</filename>.
- For information about the settings available and the defaults, see the
- <filename>meta/recipes-bsp/formfactor/files/config</filename> file found in the
- same area.
- </para>
-
- <para>
- Following is an example for the "qemuarm" machine:
- <literallayout class='monospaced'>
- HAVE_TOUCHSCREEN=1
- HAVE_KEYBOARD=1
-
- DISPLAY_CAN_ROTATE=0
- DISPLAY_ORIENTATION=0
- #DISPLAY_WIDTH_PIXELS=640
- #DISPLAY_HEIGHT_PIXELS=480
- #DISPLAY_BPP=16
- DISPLAY_DPI=150
- DISPLAY_SUBPIXEL_ORDER=vrgb
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id='gs-upgrading-recipes'>
- <title>Upgrading Recipes</title>
-
- <para>
- Over time, upstream developers publish new versions for software
- built by layer recipes.
- It is recommended to keep recipes up-to-date with upstream
- version releases.
- </para>
-
- <para>
- While several methods exist that allow you to upgrade a recipe,
- you might consider checking on the upgrade status of a recipe
- first.
- You can do so using the
- <filename>devtool check-upgrade-status</filename> command.
- See the
- "<ulink url='&YOCTO_DOCS_REF_URL;#devtool-checking-on-the-upgrade-status-of-a-recipe'>Checking on the Upgrade Status of a Recipe</ulink>"
- section in the Yocto Project Reference Manual for more information.
- </para>
-
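- <para>
- For example, to check a single recipe (here the
- <filename>nano</filename> recipe, chosen arbitrarily), you
- could run:
- <literallayout class='monospaced'>
-     $ devtool check-upgrade-status nano
- </literallayout>
- The output reports the current and latest upstream versions,
- along with the maintainer, when that information is
- available.
- </para>
-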
- <para>
- The remainder of this section describes three ways you can
- upgrade a recipe.
- You can use the Automated Upgrade Helper (AUH) to set up
- automatic version upgrades.
- Alternatively, you can use <filename>devtool upgrade</filename>
- to set up semi-automatic version upgrades.
- Finally, you can manually upgrade a recipe by editing the
- recipe itself.
- </para>
-
- <section id='gs-using-the-auto-upgrade-helper'>
- <title>Using the Auto Upgrade Helper (AUH)</title>
-
- <para>
- The AUH utility works in conjunction with the
- OpenEmbedded build system in order to automatically generate
- upgrades for recipes based on new versions being
- published upstream.
- Use AUH when you want to create a service that performs the
- upgrades automatically and optionally sends you an email with
- the results.
- </para>
-
- <para>
- AUH allows you to update several recipes with a single run.
- You can also optionally perform build and integration tests
- using images, with the results saved to your hard drive and,
- if you choose, emailed to recipe maintainers.
- Finally, AUH creates Git commits with appropriate commit
- messages in the layer's tree for the changes made to recipes.
- <note>
- Conditions do exist when you should not use AUH to upgrade
- recipes and you should instead use either
- <filename>devtool upgrade</filename> or upgrade your
- recipes manually:
- <itemizedlist>
- <listitem><para>
- When AUH cannot complete the upgrade sequence.
- This situation usually results because custom
- patches carried by the recipe cannot be
- automatically rebased to the new version.
- In this case, <filename>devtool upgrade</filename>
- allows you to manually resolve conflicts.
- </para></listitem>
- <listitem><para>
- When for any reason you want fuller control over
- the upgrade process.
- For example, when you want special arrangements
- for testing.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
-
- <para>
- The following steps describe how to set up the AUH utility:
- <orderedlist>
- <listitem><para>
- <emphasis>Be Sure the Development Host is Set Up:</emphasis>
- You need to be sure that your development host is
- set up to use the Yocto Project.
- For information on how to set up your host, see the
- "<link linkend='dev-preparing-the-build-host'>Preparing the Build Host</link>"
- section.
- </para></listitem>
- <listitem><para>
- <emphasis>Make Sure Git is Configured:</emphasis>
- The AUH utility requires Git to be configured because
- AUH uses Git to save upgrades.
- Thus, you must have Git user and email configured.
- The following command shows your configurations:
- <literallayout class='monospaced'>
- $ git config --list
- </literallayout>
- If you do not have the user and email configured, you
- can use the following commands to do so:
- <literallayout class='monospaced'>
- $ git config --global user.name <replaceable>some_name</replaceable>
- $ git config --global user.email <replaceable>username</replaceable>@<replaceable>domain</replaceable>.com
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Clone the AUH Repository:</emphasis>
- To use AUH, you must clone the repository onto your
- development host.
- The following command uses Git to create a local
- copy of the repository on your system:
- <literallayout class='monospaced'>
- $ git clone git://git.yoctoproject.org/auto-upgrade-helper
- Cloning into 'auto-upgrade-helper'...
- remote: Counting objects: 768, done.
- remote: Compressing objects: 100% (300/300), done.
- remote: Total 768 (delta 499), reused 703 (delta 434)
- Receiving objects: 100% (768/768), 191.47 KiB | 98.00 KiB/s, done.
- Resolving deltas: 100% (499/499), done.
- Checking connectivity... done.
- </literallayout>
- AUH is not part of the
- <ulink url='&YOCTO_DOCS_REF_URL;#oe-core'>OpenEmbedded-Core (OE-Core)</ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#poky'>Poky</ulink>
- repositories.
- </para></listitem>
- <listitem><para>
- <emphasis>Create a Dedicated Build Directory:</emphasis>
- Run the
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>
- script to create a fresh build directory that you
- use exclusively for running the AUH utility:
- <literallayout class='monospaced'>
- $ cd ~/poky
- $ source oe-init-build-env <replaceable>your_AUH_build_directory</replaceable>
- </literallayout>
- Re-using an existing build directory and its
- configurations is not recommended as existing settings
- could cause AUH to fail or behave undesirably.
- </para></listitem>
- <listitem><para>
- <emphasis>Make Configurations in Your Local Configuration File:</emphasis>
- Several settings need to exist in the
- <filename>local.conf</filename> file in the build
- directory you just created for AUH.
- Make the following configurations:
- <itemizedlist>
- <listitem><para>
- If you want to enable
- <ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-build-output-quality'>Build History</ulink>,
- which is optional, you need the following
- lines in the
- <filename>conf/local.conf</filename> file:
- <literallayout class='monospaced'>
- INHERIT =+ "buildhistory"
- BUILDHISTORY_COMMIT = "1"
- </literallayout>
- With this configuration and a successful
- upgrade, a build history "diff" file appears in
- the
- <filename>upgrade-helper/work/recipe/buildhistory-diff.txt</filename>
- file found in your build directory.
- </para></listitem>
- <listitem><para>
- If you want to enable testing through the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-testimage*'><filename>testimage</filename></ulink>
- class, which is optional, you need to have the
- following set in your
- <filename>conf/local.conf</filename> file:
- <literallayout class='monospaced'>
- INHERIT += "testimage"
- </literallayout>
- <note>
- If your distro does not enable ptest by
- default (Poky does), you need the
- following in your
- <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_append = " ptest"
- </literallayout>
- </note>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Start a vncserver:</emphasis>
- If you are running in a server without an X11 session,
- you need to start a vncserver:
- <literallayout class='monospaced'>
- $ vncserver :1
- $ export DISPLAY=:1
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Create and Edit an AUH Configuration File:</emphasis>
- You need to have the
- <filename>upgrade-helper/upgrade-helper.conf</filename>
- configuration file in your build directory.
- You can find a sample configuration file in the
- <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/auto-upgrade-helper/tree/'>AUH source repository</ulink>.
- </para>
-
- <para>Read through the sample file and make
- configurations as needed.
- For example, if you enabled build history in your
- <filename>local.conf</filename> as described earlier,
- you must enable it in
- <filename>upgrade-helper.conf</filename>.</para>
-
- <para>Also, if you are using the default
- <filename>maintainers.inc</filename> file supplied
- with Poky and located in
- <filename>meta-yocto</filename> and you do not set a
- "maintainers_whitelist" or "global_maintainer_override"
- in the <filename>upgrade-helper.conf</filename>
- configuration, and you specify "-e all" on the
- AUH command-line, the utility automatically sends out
- emails to all the default maintainers.
- Please avoid this.
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- This next set of examples describes how to use the AUH:
- <itemizedlist>
- <listitem><para>
- <emphasis>Upgrading a Specific Recipe:</emphasis>
- To upgrade a specific recipe, use the following
- form:
- <literallayout class='monospaced'>
- $ upgrade-helper.py <replaceable>recipe_name</replaceable>
- </literallayout>
- For example, this command upgrades the
- <filename>xmodmap</filename> recipe:
- <literallayout class='monospaced'>
- $ upgrade-helper.py xmodmap
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Upgrading a Specific Recipe to a Particular Version:</emphasis>
- To upgrade a specific recipe to a particular version,
- use the following form:
- <literallayout class='monospaced'>
- $ upgrade-helper.py <replaceable>recipe_name</replaceable> -t <replaceable>version</replaceable>
- </literallayout>
- For example, this command upgrades the
- <filename>xmodmap</filename> recipe to version
- 1.2.3:
- <literallayout class='monospaced'>
- $ upgrade-helper.py xmodmap -t 1.2.3
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Upgrading all Recipes to the Latest Versions and Suppressing Email Notifications:</emphasis>
- To upgrade all recipes to their most recent versions
- and suppress the email notifications, use the following
- command:
- <literallayout class='monospaced'>
- $ upgrade-helper.py all
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Upgrading all Recipes to the Latest Versions and Sending Email Notifications:</emphasis>
- To upgrade all recipes to their most recent versions
- and send email messages to maintainers for each
- attempted recipe as well as a status email, use the
- following command:
- <literallayout class='monospaced'>
- $ upgrade-helper.py -e all
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Once you have run the AUH utility, you can find the results
- in the AUH build directory:
- <literallayout class='monospaced'>
- ${BUILDDIR}/upgrade-helper/<replaceable>timestamp</replaceable>
- </literallayout>
- The AUH utility also creates recipe update commits from
- successful upgrade attempts in the layer tree.
- </para>
-
- <para>
- You can easily set up to run the AUH utility on a regular
- basis by using a cron job.
- See the
- <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/auto-upgrade-helper/tree/weeklyjob.sh'><filename>weeklyjob.sh</filename></ulink>
- file distributed with the utility for an example.
- </para>
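-
- <para>
- For instance, a crontab entry along the following lines
- (the path to your AUH checkout is an assumption here) would
- run the job once a week:
- <literallayout class='monospaced'>
-     # Run the AUH weekly job every Sunday at 02:00
-     0 2 * * 0 /home/user/auto-upgrade-helper/weeklyjob.sh
- </literallayout>
- </para>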
- </section>
-
- <section id='gs-using-devtool-upgrade'>
- <title>Using <filename>devtool upgrade</filename></title>
-
- <para>
- As mentioned earlier, an alternative method for upgrading
- recipes to newer versions is to use
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-devtool-reference'><filename>devtool upgrade</filename></ulink>.
- You can read about <filename>devtool upgrade</filename> in
- general in the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-devtool-use-devtool-upgrade-to-create-a-version-of-the-recipe-that-supports-a-newer-version-of-the-software'>Use <filename>devtool upgrade</filename> to Create a Version of the Recipe that Supports a Newer Version of the Software</ulink>"
- section in the Yocto Project Application Development and the
- Extensible Software Development Kit (eSDK) Manual.
- </para>
-
- <para>
- To see all the command-line options available with
- <filename>devtool upgrade</filename>, use the following help
- command:
- <literallayout class='monospaced'>
- $ devtool upgrade -h
- </literallayout>
- </para>
-
- <para>
- If you want to find out what version a recipe is currently at
- upstream without any attempt to upgrade your local version of
- the recipe, you can use the following command:
- <literallayout class='monospaced'>
- $ devtool latest-version <replaceable>recipe_name</replaceable>
- </literallayout>
- </para>
-
- <para>
- As mentioned in the previous section describing AUH,
- <filename>devtool upgrade</filename> works in a
- less-automated manner than AUH.
- Specifically, <filename>devtool upgrade</filename> only
- works on a single recipe that you name on the command line,
- cannot perform build and integration testing using images,
- and does not automatically generate commits for changes in
- the source tree.
- Despite all these "limitations",
- <filename>devtool upgrade</filename> updates the recipe file
- to the new upstream version and attempts to rebase custom
- patches contained by the recipe as needed.
- <note>
- AUH uses much of <filename>devtool upgrade</filename>
- behind the scenes, making AUH somewhat of a "wrapper"
- application for <filename>devtool upgrade</filename>.
- </note>
- </para>
-
- <para>
- A typical scenario involves having used Git to clone an
- upstream repository that you use during build operations.
- Because you have built the recipe in the past, the
- layer is likely added to your configuration already.
- If for some reason the layer is not added, you could add
- it easily using the
- <ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-bitbake-layers-script'><filename>bitbake-layers</filename></ulink>
- script.
- For example, suppose you use the <filename>nano.bb</filename>
- recipe from the <filename>meta-oe</filename> layer in the
- <filename>meta-openembedded</filename> repository.
- For this example, assume that the layer has been cloned into
- the following area:
- <literallayout class='monospaced'>
- /home/scottrif/meta-openembedded
- </literallayout>
- The following command from your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- adds the layer to your build configuration (i.e.
- <filename>${BUILDDIR}/conf/bblayers.conf</filename>):
- <literallayout class='monospaced'>
- $ bitbake-layers add-layer /home/scottrif/meta-openembedded/meta-oe
- NOTE: Starting bitbake server...
- Parsing recipes: 100% |##########################################| Time: 0:00:55
- Parsing of 1431 .bb files complete (0 cached, 1431 parsed). 2040 targets, 56 skipped, 0 masked, 0 errors.
- Removing 12 recipes from the x86_64 sysroot: 100% |##############| Time: 0:00:00
- Removing 1 recipes from the x86_64_i586 sysroot: 100% |##########| Time: 0:00:00
- Removing 5 recipes from the i586 sysroot: 100% |#################| Time: 0:00:00
- Removing 5 recipes from the qemux86 sysroot: 100% |##############| Time: 0:00:00
- </literallayout>
- For this example, assume that the upstream
- <filename>nano.bb</filename> recipe is at version 2.9.3,
- while the version in the local repository is 2.7.4.
- The following command from your build directory automatically
- upgrades the recipe for you:
- <note>
- Using the <filename>-V</filename> option is not necessary.
- Omitting the version number causes
- <filename>devtool upgrade</filename> to upgrade the recipe
- to the most recent version.
- </note>
- <literallayout class='monospaced'>
- $ devtool upgrade nano -V 2.9.3
- NOTE: Starting bitbake server...
- NOTE: Creating workspace layer in /home/scottrif/poky/build/workspace
- Parsing recipes: 100% |##########################################| Time: 0:00:46
- Parsing of 1431 .bb files complete (0 cached, 1431 parsed). 2040 targets, 56 skipped, 0 masked, 0 errors.
- NOTE: Extracting current version source...
- NOTE: Resolving any missing task queue dependencies
- .
- .
- .
- NOTE: Executing SetScene Tasks
- NOTE: Executing RunQueue Tasks
- NOTE: Tasks Summary: Attempted 74 tasks of which 72 didn't need to be rerun and all succeeded.
- Adding changed files: 100% |#####################################| Time: 0:00:00
- NOTE: Upgraded source extracted to /home/scottrif/poky/build/workspace/sources/nano
- NOTE: New recipe is /home/scottrif/poky/build/workspace/recipes/nano/nano_2.9.3.bb
- </literallayout>
- Continuing with this example, you can use
- <filename>devtool build</filename> to build the newly upgraded
- recipe:
- <literallayout class='monospaced'>
- $ devtool build nano
- NOTE: Starting bitbake server...
- Loading cache: 100% |################################################################################################| Time: 0:00:01
- Loaded 2040 entries from dependency cache.
- Parsing recipes: 100% |##############################################################################################| Time: 0:00:00
- Parsing of 1432 .bb files complete (1431 cached, 1 parsed). 2041 targets, 56 skipped, 0 masked, 0 errors.
- NOTE: Resolving any missing task queue dependencies
- .
- .
- .
- NOTE: Executing SetScene Tasks
- NOTE: Executing RunQueue Tasks
- NOTE: nano: compiling from external source tree /home/scottrif/poky/build/workspace/sources/nano
- NOTE: Tasks Summary: Attempted 520 tasks of which 304 didn't need to be rerun and all succeeded.
- </literallayout>
- The <filename>devtool upgrade</filename> workflow also gives
- you an opportunity to deploy and test your rebuilt software.
- For this example, however, the next step is to run
- <filename>devtool finish</filename>, which cleans up the
- workspace once the source tree in your workspace is clean.
- This usually means using Git to stage and commit the changes
- generated by the upgrade process.
- </para>
-
- <para>
- Once the tree is clean, you can clean things up in this
- example with the following command from the
- <filename>${BUILDDIR}/workspace/sources/nano</filename>
- directory:
- <literallayout class='monospaced'>
- $ devtool finish nano meta-oe
- NOTE: Starting bitbake server...
- Loading cache: 100% |################################################################################################| Time: 0:00:00
- Loaded 2040 entries from dependency cache.
- Parsing recipes: 100% |##############################################################################################| Time: 0:00:01
- Parsing of 1432 .bb files complete (1431 cached, 1 parsed). 2041 targets, 56 skipped, 0 masked, 0 errors.
- NOTE: Adding new patch 0001-nano.bb-Stuff-I-changed-when-upgrading-nano.bb.patch
- NOTE: Updating recipe nano_2.9.3.bb
- NOTE: Removing file /home/scottrif/meta-openembedded/meta-oe/recipes-support/nano/nano_2.7.4.bb
- NOTE: Moving recipe file to /home/scottrif/meta-openembedded/meta-oe/recipes-support/nano
- NOTE: Leaving source tree /home/scottrif/poky/build/workspace/sources/nano as-is; if you no longer need it then please delete it manually
- </literallayout>
- Using the <filename>devtool finish</filename> command cleans
- up the workspace and creates a patch file based on your
- commits.
- The tool puts all patch files back into the layer with the
- recipe, in a sub-directory named <filename>nano</filename>
- in this case.
- </para>
- </section>
-
- <section id='dev-manually-upgrading-a-recipe'>
- <title>Manually Upgrading a Recipe</title>
-
- <para>
- If for some reason you choose not to upgrade recipes using the
- <link linkend='gs-using-the-auto-upgrade-helper'>Auto Upgrade Helper (AUH)</link>
- or by using
- <link linkend='gs-using-devtool-upgrade'><filename>devtool upgrade</filename></link>,
- you can manually edit the recipe files to upgrade the versions.
- <note><title>Caution</title>
- Manually updating multiple recipes scales poorly and
- involves many steps.
- The recommended way to upgrade recipe versions is to use
- AUH or <filename>devtool upgrade</filename>, both of which
- automate some steps and provide guidance for others needed
- for the manual process.
- </note>
- </para>
-
- <para>
- To manually upgrade recipe versions, follow these general
- steps (a brief example follows the list):
- <orderedlist>
- <listitem><para>
- <emphasis>Change the Version:</emphasis>
- Rename the recipe such that the version (i.e. the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- part of the recipe name) changes appropriately.
- If the version is not part of the recipe name, change
- the value as it is set for <filename>PV</filename>
- within the recipe itself.
- </para></listitem>
- <listitem><para>
- <emphasis>Update <filename>SRCREV</filename> if Needed:</emphasis>
- If the source code your recipe builds is fetched from
- Git or some other version control system, update
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
- to point to the commit hash that matches the new
- version.
- </para></listitem>
- <listitem><para>
- <emphasis>Build the Software:</emphasis>
- Try to build the recipe using BitBake.
- Typical build failures include the following:
- <itemizedlist>
- <listitem><para>
- License statements were updated for the new
- version.
- For this case, you need to review any changes
- to the license and update the values of
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE'><filename>LICENSE</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LIC_FILES_CHKSUM'><filename>LIC_FILES_CHKSUM</filename></ulink>
- as needed.
- <note>
- License changes are often inconsequential.
- For example, the license text's copyright
- year might have changed.
- </note>
- </para></listitem>
- <listitem><para>
- Custom patches carried by the older version of
- the recipe might fail to apply to the new
- version.
- For these cases, you need to review the
- failures.
- Patches might not be necessary for the new
- version of the software if the upgraded version
- has fixed those issues.
- If a patch is necessary and failing, you need
- to rebase it into the new version.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Attempt to Build for Several Architectures:</emphasis>
- Once you successfully build the new software for a
- given architecture, you could test the build for
- other architectures by changing the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- variable and rebuilding the software.
- This optional step is especially important if the
- recipe is to be released publicly.
- </para></listitem>
- <listitem><para>
- <emphasis>Check the Upstream Change Log or Release Notes:</emphasis>
- Checking both of these reveals whether new features exist
- that could break backwards compatibility.
- If so, you need to take steps to mitigate or eliminate
- that situation.
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Create a Bootable Image and Test:</emphasis>
- If you want, you can test the new software by booting
- it onto actual hardware.
- </para></listitem>
- <listitem><para>
- <emphasis>Create a Commit with the Change in the Layer Repository:</emphasis>
- After all builds work and any testing is successful,
- you can create commits for any changes in the layer
- holding your upgraded recipe.
- </para></listitem>
- </orderedlist>
- </para>
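-
- <para>
- The following sketch illustrates the first three steps for a
- hypothetical <filename>foo</filename> recipe that fetches its
- source from Git (the layer path, version numbers, and commit
- hash are illustrative only):
- <literallayout class='monospaced'>
-     $ cd meta-mylayer/recipes-example/foo
-     $ git mv foo_1.2.3.bb foo_1.3.0.bb
-
-     # In foo_1.3.0.bb, point SRCREV at the commit matching the new version:
-     SRCREV = "0123456789abcdef0123456789abcdef01234567"
-
-     $ bitbake foo
- </literallayout>
- </para>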
- </section>
- </section>
-
- <section id='finding-the-temporary-source-code'>
- <title>Finding Temporary Source Code</title>
-
- <para>
- You might find it helpful during development to modify the
- temporary source code used by recipes to build packages.
- For example, suppose you are developing a patch and you need to
- experiment a bit to figure out your solution.
- After you have initially built the package, you can iteratively
- tweak the source code, which is located in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
- and then you can force a re-compile and quickly test your altered
- code.
- Once you settle on a solution, you can then preserve your changes
- in the form of patches.
- </para>
-
- <para>
- During a build, the unpacked temporary source code used by recipes
- to build packages is available in the Build Directory as
- defined by the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>
- variable.
- Below is the default value for the <filename>S</filename> variable
- as defined in the
- <filename>meta/conf/bitbake.conf</filename> configuration file
- in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>:
- <literallayout class='monospaced'>
- S = "${WORKDIR}/${BP}"
- </literallayout>
- You should be aware that many recipes override the
- <filename>S</filename> variable.
- For example, recipes that fetch their source from Git usually set
- <filename>S</filename> to <filename>${WORKDIR}/git</filename>.
- <note>
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BP'><filename>BP</filename></ulink>
- variable represents the base recipe name, which consists of
- the name and version:
- <literallayout class='monospaced'>
- BP = "${BPN}-${PV}"
- </literallayout>
- </note>
- </para>
-
- <para>
- The path to the work directory for the recipe
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>)
- is defined as follows:
- <literallayout class='monospaced'>
- ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}/${PN}/${EXTENDPE}${PV}-${PR}
- </literallayout>
- The actual directory depends on several things:
- <itemizedlist>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>:
- The top-level build output directory.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MULTIMACH_TARGET_SYS'><filename>MULTIMACH_TARGET_SYS</filename></ulink>:
- The target system identifier.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink>:
- The recipe name.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTENDPE'><filename>EXTENDPE</filename></ulink>:
- The epoch.
- If
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>
- is not specified, which is usually the case for most
- recipes, <filename>EXTENDPE</filename> is blank.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>:
- The recipe version.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>:
- The recipe revision.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- As an example, assume a Source Directory top-level folder
- named <filename>poky</filename>, a default Build Directory at
- <filename>poky/build</filename>, and a
- <filename>qemux86-poky-linux</filename> machine target
- system.
- Furthermore, suppose your recipe is named
- <filename>foo_1.3.0.bb</filename>.
- In this case, the work directory the build system uses to
- build the package would be as follows:
- <literallayout class='monospaced'>
- poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0
- </literallayout>
- </para>
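-
- <para>
- If you do not want to construct the path by hand, you can
- ask BitBake for the values directly.
- For example, for a hypothetical recipe named
- <filename>foo</filename>:
- <literallayout class='monospaced'>
-     $ bitbake -e foo | grep ^WORKDIR=
-     $ bitbake -e foo | grep ^S=
- </literallayout>
- The <filename>bitbake -e</filename> command prints the fully
- expanded environment for the recipe, so these commands show
- the exact directories the build system is using.
- </para>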
- </section>
-
- <section id="using-a-quilt-workflow">
- <title>Using Quilt in Your Workflow</title>
-
- <para>
- <ulink url='http://savannah.nongnu.org/projects/quilt'>Quilt</ulink>
- is a powerful tool that allows you to capture source code changes
- without having a clean source tree.
- This section outlines the typical workflow you can use to modify
- source code, test changes, and then preserve the changes in the
- form of a patch all using Quilt.
- <note><title>Tip</title>
- With regard to preserving changes to source files, if you
- clean a recipe or have <filename>rm_work</filename> enabled,
- the
- <ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'><filename>devtool</filename> workflow</ulink>
- as described in the Yocto Project Application Development
- and the Extensible Software Development Kit (eSDK) manual
- is a safer development flow than the flow that uses Quilt.
- </note>
- </para>
-
- <para>
- Follow these general steps:
- <orderedlist>
- <listitem><para>
- <emphasis>Find the Source Code:</emphasis>
- Temporary source code used by the OpenEmbedded build system
- is kept in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- See the
- "<link linkend='finding-the-temporary-source-code'>Finding Temporary Source Code</link>"
- section to learn how to locate the directory that has the
- temporary source code for a particular package.
- </para></listitem>
- <listitem><para>
- <emphasis>Change Your Working Directory:</emphasis>
- You need to be in the directory that has the temporary
- source code.
- That directory is defined by the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>
- variable.</para></listitem>
- <listitem><para>
- <emphasis>Create a New Patch:</emphasis>
- Before modifying source code, you need to create a new
- patch.
- To create a new patch file, use
- <filename>quilt new</filename> as below:
- <literallayout class='monospaced'>
- $ quilt new my_changes.patch
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Notify Quilt and Add Files:</emphasis>
- After creating the patch, you need to notify Quilt about
- the files you plan to edit.
- You notify Quilt by adding the files to the patch you
- just created:
- <literallayout class='monospaced'>
- $ quilt add file1.c file2.c file3.c
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Edit the Files:</emphasis>
- Make your changes in the source code to the files you added
- to the patch.
- </para></listitem>
- <listitem><para>
- <emphasis>Test Your Changes:</emphasis>
- Once you have modified the source code, the easiest way to
- test your changes is by calling the
- <filename>do_compile</filename> task as shown in the
- following example:
- <literallayout class='monospaced'>
- $ bitbake -c compile -f <replaceable>package</replaceable>
- </literallayout>
- The <filename>-f</filename> or <filename>--force</filename>
- option forces the specified task to execute.
- If you find problems with your code, you can just keep
- editing and re-testing iteratively until things work
- as expected.
- <note>
- All the modifications you make to the temporary
- source code disappear once you run the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-clean'><filename>do_clean</filename></ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleanall'><filename>do_cleanall</filename></ulink>
- tasks using BitBake (i.e.
- <filename>bitbake -c clean <replaceable>package</replaceable></filename>
- and
- <filename>bitbake -c cleanall <replaceable>package</replaceable></filename>).
- Modifications will also disappear if you use the
- <filename>rm_work</filename> feature as described
- in the
- "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-saving-memory-during-a-build'>Conserving Disk Space During Builds</ulink>"
- section.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Generate the Patch:</emphasis>
- Once your changes work as expected, you need to use Quilt
- to generate the final patch that contains all your
- modifications.
- <literallayout class='monospaced'>
- $ quilt refresh
- </literallayout>
- At this point, the <filename>my_changes.patch</filename>
- file has all your edits made to the
- <filename>file1.c</filename>, <filename>file2.c</filename>,
- and <filename>file3.c</filename> files.</para>
-
- <para>You can find the resulting patch file in the
- <filename>patches/</filename> subdirectory of the source
- (<filename>S</filename>) directory.
- </para></listitem>
- <listitem><para>
- <emphasis>Copy the Patch File:</emphasis>
- For simplicity, copy the patch file into a directory
- named <filename>files</filename>, which you can create
- in the same directory that holds the recipe
- (<filename>.bb</filename>) file or the append
- (<filename>.bbappend</filename>) file.
- Placing the patch next to the recipe ensures that the
- OpenEmbedded build system finds the patch (for the
- <filename>.bbappend</filename> case, see the note
- following this list).
- Next, add the patch into the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename>
- of the recipe.
- Here is an example:
- <literallayout class='monospaced'>
- SRC_URI += "file://my_changes.patch"
- </literallayout>
- </para></listitem>
- </orderedlist>
- </para>
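-
- <para>
- Note that if you add the patch through an append file in a
- separate layer (the final step above), the build system does
- not automatically search directories next to the
- <filename>.bbappend</filename> file.
- In that case, a sketch such as the following (the layer
- layout and patch name are illustrative) extends the search
- path and adds the patch:
- <literallayout class='monospaced'>
-     FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-     SRC_URI += "file://my_changes.patch"
- </literallayout>
- </para>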
- </section>
-
- <section id="platdev-appdev-devshell">
- <title>Using a Development Shell</title>
-
- <para>
- When debugging certain commands or even when just editing packages,
- <filename>devshell</filename> can be a useful tool.
- When you invoke <filename>devshell</filename>, all tasks up to and
- including
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
- are run for the specified target.
- Then, a new terminal is opened and you are placed in
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink><filename>}</filename>,
- the source directory.
- In the new terminal, all the OpenEmbedded build-related environment variables are
- still defined so you can use commands such as <filename>configure</filename> and
- <filename>make</filename>.
- The commands execute just as if the OpenEmbedded build system were executing them.
- Consequently, working this way can be helpful when debugging a build or preparing
- software to be used with the OpenEmbedded build system.
- </para>
-
- <para>
- Following is an example that uses <filename>devshell</filename> on a target named
- <filename>matchbox-desktop</filename>:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop -c devshell
- </literallayout>
- </para>
-
- <para>
- This command spawns a terminal with a shell prompt within the OpenEmbedded build environment.
- The <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_TERMINAL'><filename>OE_TERMINAL</filename></ulink>
- variable controls what type of shell is opened.
- </para>
-
- <para>
- For spawned terminals, the following occurs:
- <itemizedlist>
- <listitem><para>The <filename>PATH</filename> variable includes the
- cross-toolchain.</para></listitem>
- <listitem><para>The <filename>pkgconfig</filename> variables find the correct
- <filename>.pc</filename> files.</para></listitem>
- <listitem><para>The <filename>configure</filename> command finds the
- Yocto Project site files as well as any other necessary files.</para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Within this environment, you can run configure or compile
- commands as if they were being run by
- the OpenEmbedded build system itself.
- As noted earlier, the working directory also automatically changes to the
- Source Directory (<ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>).
- </para>
-
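- <para>
- For instance, inside the <filename>devshell</filename>
- spawned for <filename>matchbox-desktop</filename>, a quick
- way to confirm that the cross-development environment is set
- up as expected (the commands are only illustrative) is:
- <literallayout class='monospaced'>
-     $ echo $CC
-     $ $CC --version
-     $ which make
- </literallayout>
- </para>
-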
- <para>
- To manually run a specific task using <filename>devshell</filename>,
- run the corresponding <filename>run.*</filename> script in
- the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}/temp</filename>
- directory (e.g.,
- <filename>run.do_configure.</filename><replaceable>pid</replaceable>).
- If a task's script does not exist, which would be the case if the task was
- skipped by way of the sstate cache, you can create the script by first running
- the task outside of the <filename>devshell</filename>:
- <literallayout class='monospaced'>
- $ bitbake -c <replaceable>task</replaceable>
- </literallayout>
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>Execution of a task's <filename>run.*</filename>
- script and BitBake's execution of a task are identical.
- In other words, running the script re-runs the task
- just as it would be run using the
- <filename>bitbake -c</filename> command.
- </para></listitem>
- <listitem><para>Any <filename>run.*</filename> file that does not
- have a <filename>.pid</filename> extension is a
- symbolic link (symlink) to the most recent version of that
- file.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
-
- <para>
- Remember that the <filename>devshell</filename> is a mechanism that allows
- you to get into the BitBake task execution environment.
- As such, all commands must be called just as BitBake would call them.
- That means you need to provide the appropriate options for
- cross-compilation and so forth as applicable.
- </para>
-
- <para>
- When you are finished using <filename>devshell</filename>, exit the shell
- or close the terminal window.
- </para>
-
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- It is worth remembering that when using <filename>devshell</filename>
- you need to use the full compiler name such as <filename>arm-poky-linux-gnueabi-gcc</filename>
- instead of just using <filename>gcc</filename>.
- The same applies to other applications such as <filename>binutils</filename>,
- <filename>libtool</filename> and so forth.
- BitBake sets up environment variables such as <filename>CC</filename>
- to assist applications, such as <filename>make</filename>, in finding the correct tools.
- </para></listitem>
- <listitem><para>
- It is also worth noting that <filename>devshell</filename> still works over
- X11 forwarding and similar situations.
- </para></listitem>
- </itemizedlist>
- </note>
- </section>
-
- <section id="platdev-appdev-devpyshell">
- <title>Using a Development Python Shell</title>
-
- <para>
- Similar to working within a development shell as described in
- the previous section, you can also spawn and work within an
- interactive Python development shell.
- When debugging certain commands or even when just editing packages,
- <filename>devpyshell</filename> can be a useful tool.
- When you invoke <filename>devpyshell</filename>, all tasks up to and
- including
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
- are run for the specified target.
- Then a new terminal is opened.
- Additionally, key Python objects and code are available in the same
- way they are to BitBake tasks, in particular, the data store 'd'.
- So, commands such as the following are useful when exploring the data
- store and running functions:
- <literallayout class='monospaced'>
- pydevshell> d.getVar("STAGING_DIR")
- '/media/build1/poky/build/tmp/sysroots'
- pydevshell> d.getVar("STAGING_DIR")
- '${TMPDIR}/sysroots'
- pydevshell> d.setVar("FOO", "bar")
- pydevshell> d.getVar("FOO")
- 'bar'
- pydevshell> d.delVar("FOO")
- pydevshell> d.getVar("FOO")
- pydevshell> bb.build.exec_func("do_unpack", d)
- pydevshell>
- </literallayout>
- The commands execute just as if the OpenEmbedded build system were executing them.
- Consequently, working this way can be helpful when debugging a build or preparing
- software to be used with the OpenEmbedded build system.
- </para>
-
- <para>
- Following is an example that uses <filename>devpyshell</filename> on a target named
- <filename>matchbox-desktop</filename>:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop -c devpyshell
- </literallayout>
- </para>
-
- <para>
- This command spawns a terminal and places you in an interactive
- Python interpreter within the OpenEmbedded build environment.
- The <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_TERMINAL'><filename>OE_TERMINAL</filename></ulink>
- variable controls what type of shell is opened.
- </para>
-
- <para>
- When you are finished using <filename>devpyshell</filename>, you
- can exit the shell either by using Ctrl+d or closing the terminal
- window.
- </para>
- </section>
-
- <section id='dev-building'>
- <title>Building</title>
-
- <para>
- This section describes various build procedures, such as the
- steps needed for a simple build, building a target that
- uses multiple configurations, building an image for more than
- one machine, and so forth.
- </para>
-
- <section id='dev-building-a-simple-image'>
- <title>Building a Simple Image</title>
-
- <para>
- In the development environment, you need to build an image
- whenever you change hardware support, add or change system
- libraries, or add or change services that have dependencies.
- Several methods exist that allow you to build an image within
- the Yocto Project.
- This section presents the basic steps you need to build a
- simple image using BitBake from a build host running Linux.
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- For information on how to build an image using
- <ulink url='&YOCTO_DOCS_REF_URL;#toaster-term'>Toaster</ulink>,
- see the
- <ulink url='&YOCTO_DOCS_TOAST_URL;'>Toaster User Manual</ulink>.
- </para></listitem>
- <listitem><para>
- For information on how to use
- <filename>devtool</filename> to build images, see
- the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'>Using <filename>devtool</filename> in Your SDK Workflow</ulink>"
- section in the Yocto Project Application
- Development and the Extensible Software Development
- Kit (eSDK) manual.
- </para></listitem>
- <listitem><para>
- For a quick example on how to build an image using
- the OpenEmbedded build system, see the
- <ulink url='&YOCTO_DOCS_BRIEF_URL;'>Yocto Project Quick Build</ulink>
- document.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
-
- <para>
- The build process creates an entire Linux distribution from
- source and places it in your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- under <filename>tmp/deploy/images</filename>.
- For detailed information on the build process using BitBake,
- see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#images-dev-environment'>Images</ulink>"
- section in the Yocto Project Overview and Concepts Manual.
- </para>
-
- <para>
-                The following figure and list provide an overview of the build process:
- <imagedata fileref="figures/bitbake-build-flow.png" width="7in" depth="4in" align="center" scalefit="1" />
- <orderedlist>
- <listitem><para>
- <emphasis>Set up Your Host Development System to Support
- Development Using the Yocto Project</emphasis>:
- See the
- "<link linkend='dev-manual-start'>Setting Up to Use the Yocto Project</link>"
- section for options on how to get a build host ready to
- use the Yocto Project.
- </para></listitem>
- <listitem><para>
- <emphasis>Initialize the Build Environment:</emphasis>
- Initialize the build environment by sourcing the build
- environment script (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>):
- <literallayout class='monospaced'>
- $ source &OE_INIT_FILE; [<replaceable>build_dir</replaceable>]
- </literallayout></para>
-
- <para>When you use the initialization script, the
- OpenEmbedded build system uses
- <filename>build</filename> as the default Build
- Directory in your current work directory.
- You can use a <replaceable>build_dir</replaceable>
- argument with the script to specify a different build
- directory.
- <note><title>Tip</title>
- A common practice is to use a different Build
- Directory for different targets.
- For example, <filename>~/build/x86</filename> for a
- <filename>qemux86</filename> target, and
- <filename>~/build/arm</filename> for a
- <filename>qemuarm</filename> target.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Make Sure Your <filename>local.conf</filename>
- File is Correct:</emphasis>
- Ensure the <filename>conf/local.conf</filename>
- configuration file, which is found in the Build
- Directory, is set up how you want it.
- This file defines many aspects of the build environment
- including the target machine architecture through the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'>MACHINE</ulink></filename> variable,
- the packaging format used during the build
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></ulink>),
- and a centralized tarball download directory through the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'><filename>DL_DIR</filename></ulink> variable.
-                        A brief example configuration appears after these steps.
- </para></listitem>
- <listitem><para>
- <emphasis>Build the Image:</emphasis>
- Build the image using the <filename>bitbake</filename>
- command:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>target</replaceable>
- </literallayout>
- <note>
- For information on BitBake, see the
- <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
- </note>
- The <replaceable>target</replaceable> is the name of the
- recipe you want to build.
- Common targets are the images in
- <filename>meta/recipes-core/images</filename>,
- <filename>meta/recipes-sato/images</filename>, and so
-                        forth, all of which are found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
- Or, the target can be the name of a recipe for a
- specific piece of software such as BusyBox.
- For more details about the images the OpenEmbedded build
- system supports, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
- chapter in the Yocto Project Reference Manual.</para>
-
- <para>As an example, the following command builds the
- <filename>core-image-minimal</filename> image:
- <literallayout class='monospaced'>
- $ bitbake core-image-minimal
- </literallayout>
- Once an image has been built, it often needs to be
- installed.
- The images and kernels built by the OpenEmbedded
- build system are placed in the Build Directory in
- <filename class="directory">tmp/deploy/images</filename>.
- For information on how to run pre-built images such as
- <filename>qemux86</filename> and <filename>qemuarm</filename>,
- see the
- <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
- manual.
- For information about how to install these images,
- see the documentation for your particular board or
- machine.
- </para></listitem>
- </orderedlist>
- </para>
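-
-            <para>
-                As an example, a minimal <filename>conf/local.conf</filename>
-                covering the variables mentioned in the previous steps might
-                contain statements such as the following.
-                The values shown are illustrative only; adjust them for your
-                target and build host:
-                <literallayout class='monospaced'>
-     MACHINE ?= "qemux86"
-     PACKAGE_CLASSES ?= "package_rpm"
-     DL_DIR ?= "${TOPDIR}/downloads"
-                </literallayout>
-            </para>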
- </section>
-
- <section id='dev-building-images-for-multiple-targets-using-multiple-configurations'>
- <title>Building Images for Multiple Targets Using Multiple Configurations</title>
-
- <para>
- You can use a single <filename>bitbake</filename> command
- to build multiple images or packages for different targets
- where each image or package requires a different configuration
- (multiple configuration builds).
- The builds, in this scenario, are sometimes referred to as
- "multiconfigs", and this section uses that term throughout.
- </para>
-
- <para>
- This section describes how to set up for multiple
- configuration builds and how to account for cross-build
- dependencies between the multiconfigs.
- </para>
-
- <section id='dev-setting-up-and-running-a-multiple-configuration-build'>
- <title>Setting Up and Running a Multiple Configuration Build</title>
-
- <para>
- To accomplish a multiple configuration build, you must
- define each target's configuration separately using
- a parallel configuration file in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
- and you must follow a required file hierarchy.
- Additionally, you must enable the multiple configuration
- builds in your <filename>local.conf</filename> file.
- </para>
-
- <para>
- Follow these steps to set up and execute multiple
- configuration builds:
- <itemizedlist>
- <listitem><para>
- <emphasis>Create Separate Configuration Files</emphasis>:
- You need to create a single configuration file for
- each build target (each multiconfig).
- Minimally, each configuration file must define the
- machine and the temporary directory BitBake uses
- for the build.
- Suggested practice dictates that you do not
- overlap the temporary directories
- used during the builds.
- However, it is possible that you can share the
- temporary directory
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>).
- For example, consider a scenario with two
- different multiconfigs for the same
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>: "qemux86" built for
- two distributions such as "poky" and "poky-lsb".
- In this case, you might want to use the same
- <filename>TMPDIR</filename>.</para>
-
- <para>Here is an example showing the minimal
- statements needed in a configuration file for
- a "qemux86" target whose temporary build directory
- is <filename>tmpmultix86</filename>:
- <literallayout class='monospaced'>
- MACHINE="qemux86"
- TMPDIR="${TOPDIR}/tmpmultix86"
- </literallayout></para>
-
- <para>The location for these multiconfig
- configuration files is specific.
- They must reside in the current build directory in
- a sub-directory of <filename>conf</filename> named
- <filename>multiconfig</filename>.
- Following is an example that defines two
- configuration files for the "x86" and "arm"
-                                multiconfigs (the same layout is shown in text form after these steps):
- <imagedata fileref="figures/multiconfig_files.png" align="center" width="4in" depth="3in" />
- </para>
-
-                            <para>This file hierarchy is required
-                            because the <filename>BBPATH</filename> variable
- is not constructed until the layers are parsed.
- Consequently, using the configuration file as a
- pre-configuration file is not possible unless it is
- located in the current working directory.
- </para></listitem>
- <listitem><para>
- <emphasis>Add the BitBake Multi-configuration Variable to the Local Configuration File</emphasis>:
- Use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBMULTICONFIG'><filename>BBMULTICONFIG</filename></ulink>
- variable in your
- <filename>conf/local.conf</filename> configuration
- file to specify each multiconfig.
- Continuing with the example from the previous
- figure, the <filename>BBMULTICONFIG</filename>
- variable needs to enable two multiconfigs: "x86"
- and "arm" by specifying each configuration file:
- <literallayout class='monospaced'>
- BBMULTICONFIG = "x86 arm"
- </literallayout>
- <note>
- A "default" configuration already exists by
- definition.
- This configuration is named: "" (i.e. empty
- string) and is defined by the variables coming
- from your <filename>local.conf</filename> file.
- Consequently, the previous example actually
- adds two additional configurations to your
- build: "arm" and "x86" along with "".
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Launch BitBake</emphasis>:
- Use the following BitBake command form to launch the
- multiple configuration build:
- <literallayout class='monospaced'>
- $ bitbake [mc:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable> [[[mc:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable>] ... ]
- </literallayout>
- For the example in this section, the following
- command applies:
- <literallayout class='monospaced'>
- $ bitbake mc:x86:core-image-minimal mc:arm:core-image-sato mc::core-image-base
- </literallayout>
- The previous BitBake command builds a
- <filename>core-image-minimal</filename> image that
- is configured through the
- <filename>x86.conf</filename> configuration file,
- a <filename>core-image-sato</filename>
- image that is configured through the
- <filename>arm.conf</filename> configuration file
- and a <filename>core-image-base</filename> that is
- configured through your
- <filename>local.conf</filename> configuration file.
- </para></listitem>
- </itemizedlist>
- <note>
- Support for multiple configuration builds in the
- Yocto Project &DISTRO; (&DISTRO_NAME;) Release does
- not include Shared State (sstate) optimizations.
- Consequently, if a build uses the same object twice
- in, for example, two different
- <filename>TMPDIR</filename> directories, the build
- either loads from an existing sstate cache for that
- build at the start or builds the object fresh.
- </note>
- </para>
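-
-                <para>
-                    For reference, the configuration file layout used in the
-                    previous example looks like the following when written
-                    out in text form (the "x86" and "arm" names are simply
-                    the example multiconfig names used above):
-                    <literallayout class='monospaced'>
-     <replaceable>build_directory</replaceable>/conf/multiconfig/x86.conf
-     <replaceable>build_directory</replaceable>/conf/multiconfig/arm.conf
-                    </literallayout>
-                </para>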
- </section>
-
- <section id='dev-enabling-multiple-configuration-build-dependencies'>
- <title>Enabling Multiple Configuration Build Dependencies</title>
-
- <para>
- Sometimes dependencies can exist between targets
- (multiconfigs) in a multiple configuration build.
- For example, suppose that in order to build a
- <filename>core-image-sato</filename> image for an "x86"
- multiconfig, the root filesystem of an "arm"
- multiconfig must exist.
- This dependency is essentially that the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-image'><filename>do_image</filename></ulink>
- task in the <filename>core-image-sato</filename> recipe
- depends on the completion of the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-rootfs'><filename>do_rootfs</filename></ulink>
- task of the <filename>core-image-minimal</filename>
- recipe.
- </para>
-
- <para>
- To enable dependencies in a multiple configuration
- build, you must declare the dependencies in the recipe
- using the following statement form:
- <literallayout class='monospaced'>
- <replaceable>task_or_package</replaceable>[mcdepends] = "mc:<replaceable>from_multiconfig</replaceable>:<replaceable>to_multiconfig</replaceable>:<replaceable>recipe_name</replaceable>:<replaceable>task_on_which_to_depend</replaceable>"
- </literallayout>
- To better show how to use this statement, consider the
- example scenario from the first paragraph of this section.
- The following statement needs to be added to the recipe
- that builds the <filename>core-image-sato</filename>
- image:
- <literallayout class='monospaced'>
- do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_rootfs"
- </literallayout>
- In this example, the
- <replaceable>from_multiconfig</replaceable> is "x86".
- The <replaceable>to_multiconfig</replaceable> is "arm".
- The task on which the <filename>do_image</filename> task
- in the recipe depends is the <filename>do_rootfs</filename>
- task from the <filename>core-image-minimal</filename>
- recipe associated with the "arm" multiconfig.
- </para>
-
- <para>
- Once you set up this dependency, you can build the
- "x86" multiconfig using a BitBake command as follows:
- <literallayout class='monospaced'>
- $ bitbake mc:x86:core-image-sato
- </literallayout>
- This command executes all the tasks needed to create
- the <filename>core-image-sato</filename> image for the
- "x86" multiconfig.
- Because of the dependency, BitBake also executes through
- the <filename>do_rootfs</filename> task for the "arm"
- multiconfig build.
- </para>
-
- <para>
- Having a recipe depend on the root filesystem of another
- build might not seem that useful.
- Consider this change to the statement in the
- <filename>core-image-sato</filename> recipe:
- <literallayout class='monospaced'>
- do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_image"
- </literallayout>
- In this case, BitBake must create the
- <filename>core-image-minimal</filename> image for the
- "arm" build since the "x86" build depends on it.
- </para>
-
- <para>
- Because "x86" and "arm" are enabled for multiple
- configuration builds and have separate configuration
- files, BitBake places the artifacts for each build in the
- respective temporary build directories (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>).
- </para>
- </section>
- </section>
-
- <section id='building-an-initramfs-image'>
- <title>Building an Initial RAM Filesystem (initramfs) Image</title>
-
- <para>
- An initial RAM filesystem (initramfs) image provides a temporary
- root filesystem used for early system initialization (e.g.
- loading of modules needed to locate and mount the "real" root
- filesystem).
- <note>
- The initramfs image is the successor of initial RAM disk
- (initrd).
- It is a "copy in and out" (cpio) archive of the initial
- filesystem that gets loaded into memory during the Linux
- startup process.
- Because Linux uses the contents of the archive during
- initialization, the initramfs image needs to contain all of the
- device drivers and tools needed to mount the final root
- filesystem.
- </note>
- </para>
-
- <para>
- Follow these steps to create an initramfs image:
- <orderedlist>
- <listitem><para>
- <emphasis>Create the initramfs Image Recipe:</emphasis>
- You can reference the
- <filename>core-image-minimal-initramfs.bb</filename>
- recipe found in the <filename>meta/recipes-core</filename>
- directory of the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- as an example from which to work.
- </para></listitem>
- <listitem><para>
- <emphasis>Decide if You Need to Bundle the initramfs Image
- Into the Kernel Image:</emphasis>
- If you want the initramfs image that is built to be
- bundled in with the kernel image, set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></ulink>
- variable to "1" in your <filename>local.conf</filename>
- configuration file and set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE'><filename>INITRAMFS_IMAGE</filename></ulink>
-                        variable in the recipe that builds the kernel image
-                        (a brief example follows these steps).
- <note><title>Tip</title>
- It is recommended that you do bundle the initramfs
- image with the kernel image to avoid circular
- dependencies between the kernel recipe and the
- initramfs recipe should the initramfs image
- include kernel modules.
- </note>
- Setting the <filename>INITRAMFS_IMAGE_BUNDLE</filename>
- flag causes the initramfs image to be unpacked
- into the <filename>${B}/usr/</filename> directory.
- The unpacked initramfs image is then passed to the kernel's
- <filename>Makefile</filename> using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-CONFIG_INITRAMFS_SOURCE'><filename>CONFIG_INITRAMFS_SOURCE</filename></ulink>
- variable, allowing the initramfs image to be built into
- the kernel normally.
- <note>
- If you choose to not bundle the initramfs image with
- the kernel image, you are essentially using an
- <ulink url='https://en.wikipedia.org/wiki/Initrd'>Initial RAM Disk (initrd)</ulink>.
- Creating an initrd is handled primarily through the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRD_IMAGE'><filename>INITRD_IMAGE</filename></ulink>,
- <filename>INITRD_LIVE</filename>, and
- <filename>INITRD_IMAGE_LIVE</filename> variables.
- For more information, see the
- <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/classes/image-live.bbclass'><filename>image-live.bbclass</filename></ulink>
- file.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Add Items to the initramfs Image
- Through the initramfs Image Recipe:</emphasis>
- If you add items to the initramfs image by way of its
- recipe, you should use
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_INSTALL'><filename>PACKAGE_INSTALL</filename></ulink>
- rather than
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>.
-                        <filename>PACKAGE_INSTALL</filename> gives more direct
-                        control over what is added to the image, as compared to
-                        the defaults set by the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-image'><filename>image</filename></ulink>
-                        or
-                        <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-core-image'><filename>core-image</filename></ulink>
-                        classes, which you might not necessarily want.
- </para></listitem>
- <listitem><para>
- <emphasis>Build the Kernel Image and the initramfs
- Image:</emphasis>
- Build your kernel image using BitBake.
- Because the initramfs image recipe is a dependency of the
- kernel image, the initramfs image is built as well and
- bundled with the kernel image if you used the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></ulink>
- variable described earlier.
- </para></listitem>
- </orderedlist>
- </para>
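-
-            <para>
-                As a brief sketch of the bundling approach described in the
-                previous steps, the relevant statements are along the
-                following lines, assuming you reuse the
-                <filename>core-image-minimal-initramfs</filename> recipe as
-                your initramfs image:
-                <literallayout class='monospaced'>
-     INITRAMFS_IMAGE_BUNDLE = "1"
-     INITRAMFS_IMAGE = "core-image-minimal-initramfs"
-                </literallayout>
-                As noted in the steps, <filename>INITRAMFS_IMAGE_BUNDLE</filename>
-                belongs in your <filename>local.conf</filename> file, while
-                <filename>INITRAMFS_IMAGE</filename> is set in (or appended
-                to) the recipe that builds the kernel image.
-            </para>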
- </section>
-
- <section id='building-a-tiny-system'>
- <title>Building a Tiny System</title>
-
- <para>
- Very small distributions have some significant advantages such
- as requiring less on-die or in-package memory (cheaper), better
- performance through efficient cache usage, lower power requirements
- due to less memory, faster boot times, and reduced development
- overhead.
- Some real-world examples where a very small distribution gives
- you distinct advantages are digital cameras, medical devices,
- and small headless systems.
- </para>
-
- <para>
-            This section shows you how you can trim your distribution to
-            even smaller sizes than the <filename>poky-tiny</filename>
-            distribution, which is around 5 Mbytes and can be built
-            out-of-the-box using the Yocto Project.
- </para>
-
- <section id='tiny-system-overview'>
- <title>Overview</title>
-
- <para>
- The following list presents the overall steps you need to
- consider and perform to create distributions with smaller
- root filesystems, achieve faster boot times, maintain your critical
- functionality, and avoid initial RAM disks:
- <itemizedlist>
- <listitem><para>
- <link linkend='goals-and-guiding-principles'>Determine your goals and guiding principles.</link>
- </para></listitem>
- <listitem><para>
- <link linkend='understand-what-gives-your-image-size'>Understand what contributes to your image size.</link>
- </para></listitem>
- <listitem><para>
- <link linkend='trim-the-root-filesystem'>Reduce the size of the root filesystem.</link>
- </para></listitem>
- <listitem><para>
- <link linkend='trim-the-kernel'>Reduce the size of the kernel.</link>
- </para></listitem>
- <listitem><para>
- <link linkend='remove-package-management-requirements'>Eliminate packaging requirements.</link>
- </para></listitem>
- <listitem><para>
- <link linkend='look-for-other-ways-to-minimize-size'>Look for other ways to minimize size.</link>
- </para></listitem>
- <listitem><para>
- <link linkend='iterate-on-the-process'>Iterate on the process.</link>
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='goals-and-guiding-principles'>
- <title>Goals and Guiding Principles</title>
-
- <para>
- Before you can reach your destination, you need to know
- where you are going.
- Here is an example list that you can use as a guide when
- creating very small distributions:
- <itemizedlist>
- <listitem><para>Determine how much space you need
- (e.g. a kernel that is 1 Mbyte or less and
- a root filesystem that is 3 Mbytes or less).
- </para></listitem>
- <listitem><para>Find the areas that are currently
- taking 90% of the space and concentrate on reducing
- those areas.
- </para></listitem>
- <listitem><para>Do not create any difficult "hacks"
- to achieve your goals.</para></listitem>
- <listitem><para>Leverage the device-specific
- options.</para></listitem>
- <listitem><para>Work in a separate layer so that you
- keep changes isolated.
- For information on how to create layers, see
- the "<link linkend='understanding-and-creating-layers'>Understanding and Creating Layers</link>" section.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='understand-what-gives-your-image-size'>
- <title>Understand What Contributes to Your Image Size</title>
-
- <para>
- It is easiest to have something to start with when creating
- your own distribution.
- You can use the Yocto Project out-of-the-box to create the
- <filename>poky-tiny</filename> distribution.
- Ultimately, you will want to make changes in your own
- distribution that are likely modeled after
- <filename>poky-tiny</filename>.
- <note>
- To use <filename>poky-tiny</filename> in your build,
- set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>
- variable in your
- <filename>local.conf</filename> file to "poky-tiny"
- as described in the
- "<link linkend='creating-your-own-distribution'>Creating Your Own Distribution</link>"
- section.
- </note>
- </para>
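-
-                <para>
-                    In other words, a single statement in your
-                    <filename>local.conf</filename> file selects the
-                    <filename>poky-tiny</filename> starting point:
-                    <literallayout class='monospaced'>
-     DISTRO = "poky-tiny"
-                    </literallayout>
-                </para>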
-
- <para>
- Understanding some memory concepts will help you reduce the
- system size.
- Memory consists of static, dynamic, and temporary memory.
- Static memory is the TEXT (code), DATA (initialized data
- in the code), and BSS (uninitialized data) sections.
- Dynamic memory represents memory that is allocated at runtime:
- stacks, hash tables, and so forth.
- Temporary memory is recovered after the boot process.
- This memory consists of memory used for decompressing
-                    the kernel and for the <filename>__init</filename>
- functions.
- </para>
-
- <para>
- To help you see where you currently are with kernel and root
- filesystem sizes, you can use two tools found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink> in
- the <filename>scripts/tiny/</filename> directory:
- <itemizedlist>
- <listitem><para><filename>ksize.py</filename>: Reports
- component sizes for the kernel build objects.
- </para></listitem>
- <listitem><para><filename>dirsize.py</filename>: Reports
- component sizes for the root filesystem.</para></listitem>
- </itemizedlist>
- This next tool and command help you organize configuration
- fragments and view file dependencies in a human-readable form:
- <itemizedlist>
- <listitem><para><filename>merge_config.sh</filename>:
- Helps you manage configuration files and fragments
- within the kernel.
- With this tool, you can merge individual configuration
- fragments together.
- The tool allows you to make overrides and warns you
- of any missing configuration options.
- The tool is ideal for allowing you to iterate on
- configurations, create minimal configurations, and
- create configuration files for different machines
- without having to duplicate your process.</para>
- <para>The <filename>merge_config.sh</filename> script is
- part of the Linux Yocto kernel Git repositories
- (i.e. <filename>linux-yocto-3.14</filename>,
- <filename>linux-yocto-3.10</filename>,
- <filename>linux-yocto-3.8</filename>, and so forth)
- in the
- <filename>scripts/kconfig</filename> directory.</para>
- <para>For more information on configuration fragments,
- see the
- "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#creating-config-fragments'>Creating Configuration Fragments</ulink>"
- section in the Yocto Project Linux Kernel Development
- Manual.
- </para></listitem>
- <listitem><para><filename>bitbake -u taskexp -g <replaceable>bitbake_target</replaceable></filename>:
- Using the BitBake command with these options brings up
- a Dependency Explorer from which you can view file
- dependencies.
- Understanding these dependencies allows you to make
- informed decisions when cutting out various pieces of the
- kernel and root filesystem.</para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='trim-the-root-filesystem'>
- <title>Trim the Root Filesystem</title>
-
- <para>
- The root filesystem is made up of packages for booting,
- libraries, and applications.
- To change things, you can configure how the packaging happens,
- which changes the way you build them.
- You can also modify the filesystem itself or select a different
- filesystem.
- </para>
-
- <para>
- First, find out what is hogging your root filesystem by running the
- <filename>dirsize.py</filename> script from your root directory:
- <literallayout class='monospaced'>
- $ cd <replaceable>root-directory-of-image</replaceable>
- $ dirsize.py 100000 > dirsize-100k.log
- $ cat dirsize-100k.log
- </literallayout>
- You can apply a filter to the script to ignore files under
- a certain size.
- The previous example filters out any files below 100 Kbytes.
- The sizes reported by the tool are uncompressed, and thus
- will be smaller by a relatively constant factor in a
- compressed root filesystem.
- When you examine your log file, you can focus on areas of the
- root filesystem that take up large amounts of memory.
- </para>
-
- <para>
- You need to be sure that what you eliminate does not cripple
- the functionality you need.
- One way to see how packages relate to each other is by using
- the Dependency Explorer UI with the BitBake command:
- <literallayout class='monospaced'>
- $ cd <replaceable>image-directory</replaceable>
- $ bitbake -u taskexp -g <replaceable>image</replaceable>
- </literallayout>
- Use the interface to select potential packages you wish to
- eliminate and see their dependency relationships.
- </para>
-
- <para>
-                    When deciding how to reduce the size, get rid of packages
-                    whose removal has minimal impact on the feature set.
- For example, you might not need a VGA display.
- Or, you might be able to get by with <filename>devtmpfs</filename>
- and <filename>mdev</filename> instead of
- <filename>udev</filename>.
- </para>
-
- <para>
- Use your <filename>local.conf</filename> file to make changes.
- For example, to eliminate <filename>udev</filename> and
- <filename>glib</filename>, set the following in the
- local configuration file:
- <literallayout class='monospaced'>
- VIRTUAL-RUNTIME_dev_manager = ""
- </literallayout>
- </para>
-
- <para>
- Finally, you should consider exactly the type of root
- filesystem you need to meet your needs while also reducing
- its size.
- For example, consider <filename>cramfs</filename>,
- <filename>squashfs</filename>, <filename>ubifs</filename>,
-                    <filename>ext2</filename>, or using an
-                    <filename>initramfs</filename> as the root filesystem itself.
- Be aware that <filename>ext3</filename> requires a 1 Mbyte
- journal.
- If you are okay with running read-only, you do not need this
- journal.
- </para>
-
- <note>
- After each round of elimination, you need to rebuild your
- system and then use the tools to see the effects of your
- reductions.
- </note>
- </section>
-
- <section id='trim-the-kernel'>
- <title>Trim the Kernel</title>
-
- <para>
- The kernel is built by including policies for hardware-independent
- aspects.
- What subsystems do you enable?
- For what architecture are you building?
- Which drivers do you build by default?
- <note>You can modify the kernel source if you want to help
- with boot time.
- </note>
- </para>
-
- <para>
- Run the <filename>ksize.py</filename> script from the top-level
- Linux build directory to get an idea of what is making up
- the kernel:
- <literallayout class='monospaced'>
- $ cd <replaceable>top-level-linux-build-directory</replaceable>
- $ ksize.py > ksize.log
- $ cat ksize.log
- </literallayout>
- When you examine the log, you will see how much space is
- taken up with the built-in <filename>.o</filename> files for
- drivers, networking, core kernel files, filesystem, sound,
- and so forth.
- The sizes reported by the tool are uncompressed, and thus
- will be smaller by a relatively constant factor in a compressed
- kernel image.
-                    Look to reduce the large areas, applying the "90% rule"
-                    mentioned earlier: concentrate on the areas that consume
-                    the most space.
- </para>
-
- <para>
- To examine, or drill down, into any particular area, use the
- <filename>-d</filename> option with the script:
- <literallayout class='monospaced'>
- $ ksize.py -d > ksize.log
- </literallayout>
- Using this option breaks out the individual file information
- for each area of the kernel (e.g. drivers, networking, and
- so forth).
- </para>
-
- <para>
- Use your log file to see what you can eliminate from the kernel
- based on features you can let go.
- For example, if you are not going to need sound, you do not
- need any drivers that support sound.
- </para>
-
- <para>
- After figuring out what to eliminate, you need to reconfigure
- the kernel to reflect those changes during the next build.
- You could run <filename>menuconfig</filename> and make all your
- changes at once.
- However, that makes it difficult to see the effects of your
- individual eliminations and also makes it difficult to replicate
- the changes for perhaps another target device.
- A better method is to start with no configurations using
- <filename>allnoconfig</filename>, create configuration
-                    fragments for individual changes, and then merge the
-                    fragments into a single configuration file using
- <filename>merge_config.sh</filename>.
- The tool makes it easy for you to iterate using the
- configuration change and build cycle.
- </para>
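-
-                <para>
-                    As a rough sketch of that workflow, the commands involved
-                    look something like the following when run from the
-                    kernel source (or build) tree.
-                    The fragment names here are hypothetical, and the exact
-                    options <filename>merge_config.sh</filename> accepts are
-                    documented in the script itself:
-                    <literallayout class='monospaced'>
-     $ make allnoconfig
-     $ scripts/kconfig/merge_config.sh -m .config tiny-base.cfg no-sound.cfg
-     $ make olddefconfig
-                    </literallayout>
-                </para>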
-
- <para>
- Each time you make configuration changes, you need to rebuild
- the kernel and check to see what impact your changes had on
- the overall size.
- </para>
- </section>
-
- <section id='remove-package-management-requirements'>
- <title>Remove Package Management Requirements</title>
-
- <para>
- Packaging requirements add size to the image.
- One way to reduce the size of the image is to remove all the
- packaging requirements from the image.
- This reduction includes both removing the package manager
- and its unique dependencies as well as removing the package
- management data itself.
- </para>
-
- <para>
- To eliminate all the packaging requirements for an image,
- be sure that "package-management" is not part of your
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>
- statement for the image.
- When you remove this feature, you are removing the package
- manager as well as its dependencies from the root filesystem.
- </para>
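-
-                <para>
-                    For example, one way to make sure the feature is dropped,
-                    using the override syntax that applies to the releases
-                    covered by this manual, is to add the following to your
-                    image recipe or <filename>local.conf</filename> file:
-                    <literallayout class='monospaced'>
-     IMAGE_FEATURES_remove = "package-management"
-                    </literallayout>
-                </para>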
- </section>
-
- <section id='look-for-other-ways-to-minimize-size'>
- <title>Look for Other Ways to Minimize Size</title>
-
- <para>
- Depending on your particular circumstances, other areas that you
- can trim likely exist.
- The key to finding these areas is through tools and methods
- described here combined with experimentation and iteration.
- Here are a couple of areas to experiment with:
- <itemizedlist>
- <listitem><para><filename>glibc</filename>:
- In general, follow this process:
- <orderedlist>
- <listitem><para>Remove <filename>glibc</filename>
- features from
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></ulink>
-                                    that you think you do not need (see the
-                                    example after this list).</para></listitem>
- <listitem><para>Build your distribution.
- </para></listitem>
- <listitem><para>If the build fails due to missing
- symbols in a package, determine if you can
- reconfigure the package to not need those
- features.
- For example, change the configuration to not
- support wide character support as is done for
- <filename>ncurses</filename>.
- Or, if support for those characters is needed,
- determine what <filename>glibc</filename>
- features provide the support and restore the
- configuration.
- </para></listitem>
- <listitem><para>Rebuild and repeat the process.
- </para></listitem>
- </orderedlist></para></listitem>
- <listitem><para><filename>busybox</filename>:
-                            For BusyBox, use a process similar to the one described for
- <filename>glibc</filename>.
- A difference is you will need to boot the resulting
- system to see if you are able to do everything you
- expect from the running system.
-                            You need to be sure to integrate configuration fragments
-                            into BusyBox because BusyBox handles its own core
- features and then allows you to add configuration
- fragments on top.
- </para></listitem>
- </itemizedlist>
- </para>
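-
-                <para>
-                    As an illustration of the first step for
-                    <filename>glibc</filename>, you can remove an item from
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></ulink>
-                    in your <filename>local.conf</filename> file.
-                    The feature shown here is only an example of one you
-                    might decide you do not need:
-                    <literallayout class='monospaced'>
-     DISTRO_FEATURES_remove = "ipv6"
-                    </literallayout>
-                </para>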
- </section>
-
- <section id='iterate-on-the-process'>
- <title>Iterate on the Process</title>
-
- <para>
- If you have not reached your goals on system size, you need
- to iterate on the process.
- The process is the same.
- Use the tools and see just what is taking up 90% of the root
- filesystem and the kernel.
- Decide what you can eliminate without limiting your device
- beyond what you need.
- </para>
-
- <para>
- Depending on your system, a good place to look might be
-                    BusyBox, which provides a stripped-down
-                    version of Unix tools in a single executable file.
- You might be able to drop virtual terminal services or perhaps
- ipv6.
- </para>
- </section>
- </section>
-
- <section id='building-images-for-more-than-one-machine'>
- <title>Building Images for More than One Machine</title>
-
- <para>
- A common scenario developers face is creating images for several
- different machines that use the same software environment.
- In this situation, it is tempting to set the
- tunings and optimization flags for each build specifically for
- the targeted hardware (i.e. "maxing out" the tunings).
- Doing so can considerably add to build times and package feed
- maintenance collectively for the machines.
- For example, selecting tunes that are extremely specific to a
- CPU core used in a system might enable some micro optimizations
- in GCC for that particular system but would otherwise not gain
- you much of a performance difference across the other systems
- as compared to using a more general tuning across all the builds
- (e.g. setting
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEFAULTTUNE'><filename>DEFAULTTUNE</filename></ulink>
- specifically for each machine's build).
- Rather than "max out" each build's tunings, you can take steps that
- cause the OpenEmbedded build system to reuse software across the
- various machines where it makes sense.
- </para>
-
- <para>
- If build speed and package feed maintenance are considerations,
-            the points in this section can help you optimize your tunings
-            with build times and package feed maintenance in mind.
- <itemizedlist>
- <listitem><para>
- <emphasis>Share the Build Directory:</emphasis>
- If at all possible, share the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>
- across builds.
- The Yocto Project supports switching between different
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- values in the same <filename>TMPDIR</filename>.
- This practice is well supported and regularly used by
- developers when building for multiple machines.
- When you use the same <filename>TMPDIR</filename> for
- multiple machine builds, the OpenEmbedded build system can
- reuse the existing native and often cross-recipes for
- multiple machines.
- Thus, build time decreases.
- <note>
- If
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>
- settings change or fundamental configuration settings
- such as the filesystem layout, you need to work with
- a clean <filename>TMPDIR</filename>.
- Sharing <filename>TMPDIR</filename> under these
- circumstances might work but since it is not
- guaranteed, you should use a clean
- <filename>TMPDIR</filename>.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Enable the Appropriate Package Architecture:</emphasis>
- By default, the OpenEmbedded build system enables three
- levels of package architectures: "all", "tune" or "package",
- and "machine".
- Any given recipe usually selects one of these package
- architectures (types) for its output.
-                    Depending on what a given recipe creates packages for, making
- sure you enable the appropriate package architecture can
- directly impact the build time.</para>
-
- <para>A recipe that just generates scripts can enable
- "all" architecture because there are no binaries to build.
- To specifically enable "all" architecture, be sure your
- recipe inherits the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-allarch'><filename>allarch</filename></ulink>
- class.
- This class is useful for "all" architectures because it
- configures many variables so packages can be used across
- multiple architectures.</para>
-
- <para>If your recipe needs to generate packages that are
- machine-specific or when one of the build or runtime
- dependencies is already machine-architecture dependent,
- which makes your recipe also machine-architecture dependent,
- make sure your recipe enables the "machine" package
- architecture through the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_ARCH'><filename>MACHINE_ARCH</filename></ulink>
- variable:
- <literallayout class='monospaced'>
- PACKAGE_ARCH = "${MACHINE_ARCH}"
- </literallayout>
- When you do not specifically enable a package
- architecture through the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></ulink> variable,
-                    the OpenEmbedded build system defaults to the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TUNE_PKGARCH'><filename>TUNE_PKGARCH</filename></ulink>
- setting:
- <literallayout class='monospaced'>
- PACKAGE_ARCH = "${TUNE_PKGARCH}"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Choose a Generic Tuning File if Possible:</emphasis>
- Some tunes are more generic and can run on multiple targets
- (e.g. an <filename>armv5</filename> set of packages could
- run on <filename>armv6</filename> and
- <filename>armv7</filename> processors in most cases).
- Similarly, <filename>i486</filename> binaries could work
- on <filename>i586</filename> and higher processors.
- You should realize, however, that advances on newer
- processor versions would not be used.</para>
-
- <para>If you select the same tune for several different
- machines, the OpenEmbedded build system reuses software
- previously built, thus speeding up the overall build time.
- Realize that even though a new sysroot for each machine is
- generated, the software is not recompiled and only one
- package feed exists.
- </para></listitem>
- <listitem><para>
- <emphasis>Manage Granular Level Packaging:</emphasis>
- Sometimes cases exist where injecting another level of
- package architecture beyond the three higher levels noted
- earlier can be useful.
- For example, consider how NXP (formerly Freescale) allows
- for the easy reuse of binary packages in their layer
- <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/meta-freescale/'><filename>meta-freescale</filename></ulink>.
- In this example, the
- <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/meta-freescale/tree/classes/fsl-dynamic-packagearch.bbclass'><filename>fsl-dynamic-packagearch</filename></ulink>
- class shares GPU packages for i.MX53 boards because
- all boards share the AMD GPU.
- The i.MX6-based boards can do the same because all boards
- share the Vivante GPU.
- This class inspects the BitBake datastore to identify if
- the package provides or depends on one of the
- sub-architecture values.
- If so, the class sets the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></ulink>
- value based on the <filename>MACHINE_SUBARCH</filename>
- value.
- If the package does not provide or depend on one of the
- sub-architecture values but it matches a value in the
- machine-specific filter, it sets
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_ARCH'><filename>MACHINE_ARCH</filename></ulink>.
- This behavior reduces the number of packages built and
- saves build time by reusing binaries.
- </para></listitem>
- <listitem><para>
- <emphasis>Use Tools to Debug Issues:</emphasis>
- Sometimes you can run into situations where software is
- being rebuilt when you think it should not be.
- For example, the OpenEmbedded build system might not be
- using shared state between machines when you think it
- should be.
- These types of situations are usually due to references
- to machine-specific variables such as
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SERIAL_CONSOLES'><filename>SERIAL_CONSOLES</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-XSERVER'><filename>XSERVER</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_FEATURES'><filename>MACHINE_FEATURES</filename></ulink>,
- and so forth in code that is supposed to only be
- tune-specific or when the recipe depends
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RRECOMMENDS'><filename>RRECOMMENDS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RSUGGESTS'><filename>RSUGGESTS</filename></ulink>,
- and so forth) on some other recipe that already has
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></ulink>
- defined as "${MACHINE_ARCH}".
- <note>
- Patches to fix any issues identified are most welcome
- as these issues occasionally do occur.
- </note></para>
-
- <para>For such cases, you can use some tools to help you
-                    sort out the situation (an example invocation follows this list):
- <itemizedlist>
- <listitem><para>
- <emphasis><filename>sstate-diff-machines.sh</filename>:</emphasis>
- You can find this tool in the
- <filename>scripts</filename> directory of the
- Source Repositories.
- See the comments in the script for information on
- how to use the tool.
- </para></listitem>
- <listitem><para>
- <emphasis>BitBake's "-S printdiff" Option:</emphasis>
- Using this option causes BitBake to try to
- establish the closest signature match it can
- (e.g. in the shared state cache) and then run
- <filename>bitbake-diffsigs</filename> over the
- matches to determine the stamps and delta where
- these two stamp trees diverge.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </itemizedlist>
- </para>
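-
-            <para>
-                For example, the "-S printdiff" option is used as follows
-                (the image name is just an example; see the comments at the
-                top of <filename>sstate-diff-machines.sh</filename> for how
-                to drive that script):
-                <literallayout class='monospaced'>
-     $ bitbake -S printdiff core-image-minimal
-                </literallayout>
-            </para>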
- </section>
-
- <section id="building-software-from-an-external-source">
- <title>Building Software from an External Source</title>
-
- <para>
- By default, the OpenEmbedded build system uses the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- when building source code.
- The build process involves fetching the source files, unpacking
- them, and then patching them if necessary before the build takes
- place.
- </para>
-
- <para>
- Situations exist where you might want to build software from source
- files that are external to and thus outside of the
- OpenEmbedded build system.
- For example, suppose you have a project that includes a new BSP with
- a heavily customized kernel.
- And, you want to minimize exposing the build system to the
- development team so that they can focus on their project and
- maintain everyone's workflow as much as possible.
- In this case, you want a kernel source directory on the development
- machine where the development occurs.
- You want the recipe's
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- variable to point to the external directory and use it as is, not
- copy it.
- </para>
-
- <para>
- To build from software that comes from an external source, all you
- need to do is inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-externalsrc'><filename>externalsrc</filename></ulink>
- class and then set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTERNALSRC'><filename>EXTERNALSRC</filename></ulink>
- variable to point to your external source code.
- Here are the statements to put in your
- <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- INHERIT += "externalsrc"
- EXTERNALSRC_pn-<replaceable>myrecipe</replaceable> = "<replaceable>path-to-your-source-tree</replaceable>"
- </literallayout>
- </para>
-
- <para>
- This next example shows how to accomplish the same thing by setting
- <filename>EXTERNALSRC</filename> in the recipe itself or in the
- recipe's append file:
- <literallayout class='monospaced'>
- EXTERNALSRC = "<replaceable>path</replaceable>"
- EXTERNALSRC_BUILD = "<replaceable>path</replaceable>"
- </literallayout>
- <note>
- In order for these settings to take effect, you must globally
- or locally inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-externalsrc'><filename>externalsrc</filename></ulink>
- class.
- </note>
- </para>
-
- <para>
- By default, <filename>externalsrc.bbclass</filename> builds
- the source code in a directory separate from the external source
- directory as specified by
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTERNALSRC'><filename>EXTERNALSRC</filename></ulink>.
- If you need to have the source built in the same directory in
- which it resides, or some other nominated directory, you can set
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTERNALSRC_BUILD'><filename>EXTERNALSRC_BUILD</filename></ulink>
- to point to that directory:
- <literallayout class='monospaced'>
- EXTERNALSRC_BUILD_pn-<replaceable>myrecipe</replaceable> = "<replaceable>path-to-your-source-tree</replaceable>"
- </literallayout>
- </para>
- </section>
-
- <section id="replicating-a-build-offline">
- <title>Replicating a Build Offline</title>
-
- <para>
- It can be useful to take a "snapshot" of upstream sources
- used in a build and then use that "snapshot" later to
- replicate the build offline.
- To do so, you need to first prepare and populate your downloads
-            directory with your "snapshot" of files.
- Once your downloads directory is ready, you can use it at
- any time and from any machine to replicate your build.
- </para>
-
- <para>
- Follow these steps to populate your Downloads directory:
- <orderedlist>
- <listitem><para>
- <emphasis>Create a Clean Downloads Directory:</emphasis>
- Start with an empty downloads directory
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'><filename>DL_DIR</filename></ulink>).
-                    You can do so either by removing the files in the
-                    existing directory or by setting
-                    <filename>DL_DIR</filename> to point to an empty
-                    location or to one that does not yet exist.
- </para></listitem>
- <listitem><para>
- <emphasis>Generate Tarballs of the Source Git Repositories:</emphasis>
- Edit your <filename>local.conf</filename> configuration
- file as follows:
- <literallayout class='monospaced'>
- DL_DIR = "/home/<replaceable>your-download-dir</replaceable>/"
- BB_GENERATE_MIRROR_TARBALLS = "1"
- </literallayout>
- During the fetch process in the next step, BitBake
- gathers the source files and creates tarballs in
- the directory pointed to by <filename>DL_DIR</filename>.
- See the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BB_GENERATE_MIRROR_TARBALLS'><filename>BB_GENERATE_MIRROR_TARBALLS</filename></ulink>
- variable for more information.
- </para></listitem>
- <listitem><para>
- <emphasis>Populate Your Downloads Directory Without Building:</emphasis>
- Use BitBake to fetch your sources but inhibit the
- build:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>target</replaceable> --runonly=fetch
- </literallayout>
- The downloads directory (i.e.
- <filename>${DL_DIR}</filename>) now has a "snapshot" of
- the source files in the form of tarballs, which can
- be used for the build.
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Remove Any Git or other SCM Subdirectories From the Downloads Directory:</emphasis>
- If you want, you can clean up your downloads directory
- by removing any Git or other Source Control Management
- (SCM) subdirectories such as
- <filename>${DL_DIR}/git2/*</filename>.
- The tarballs already contain these subdirectories.
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- Once your downloads directory has everything it needs regarding
- source files, you can create your "own-mirror" and build
- your target.
- Understand that you can use the files to build the target
- offline from any machine and at any time.
- </para>
-
- <para>
- Follow these steps to build your target using the files in the
- downloads directory:
- <orderedlist>
- <listitem><para>
- <emphasis>Using Local Files Only:</emphasis>
- Inside your <filename>local.conf</filename> file, add
- the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SOURCE_MIRROR_URL'><filename>SOURCE_MIRROR_URL</filename></ulink>
- variable,
- inherit the <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-own-mirrors'><filename>own-mirrors</filename></ulink>
-                    class, and set the
-                    <ulink url='&YOCTO_DOCS_BB_URL;#var-bb-BB_NO_NETWORK'><filename>BB_NO_NETWORK</filename></ulink>
-                    variable:
- <literallayout class='monospaced'>
- SOURCE_MIRROR_URL ?= "file:///home/<replaceable>your-download-dir</replaceable>/"
- INHERIT += "own-mirrors"
- BB_NO_NETWORK = "1"
- </literallayout>
-                    The <filename>SOURCE_MIRROR_URL</filename> variable and the
-                    <filename>own-mirrors</filename> class set up the system
- to use the downloads directory as your "own mirror".
- Using the <filename>BB_NO_NETWORK</filename>
- variable makes sure that BitBake's fetching process
- in step 3 stays local, which means files from
- your "own-mirror" are used.
- </para></listitem>
- <listitem><para>
- <emphasis>Start With a Clean Build:</emphasis>
- You can start with a clean build by removing the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink><filename>}</filename>
- directory or using a new
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- </para></listitem>
- <listitem><para>
- <emphasis>Build Your Target:</emphasis>
- Use BitBake to build your target:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>target</replaceable>
- </literallayout>
- The build completes using the known local "snapshot" of
- source files from your mirror.
- The resulting tarballs for your "snapshot" of source
- files are in the downloads directory.
- <note>
- <para>The offline build does not work if recipes
- attempt to find the latest version of software
- by setting
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
- to
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-AUTOREV'><filename>AUTOREV</filename></ulink><filename>}</filename>:
- <literallayout class='monospaced'>
- SRCREV = "${AUTOREV}"
- </literallayout>
- When a recipe sets
- <filename>SRCREV</filename> to
- <filename>${AUTOREV}</filename>, the build system
- accesses the network in an attempt to determine the
- latest version of software from the SCM.
- Typically, recipes that use
- <filename>AUTOREV</filename> are custom or
- modified recipes.
- Recipes that reside in public repositories
- usually do not use <filename>AUTOREV</filename>.
- </para>
-
- <para>If you do have recipes that use
- <filename>AUTOREV</filename>, you can take steps to
- still use the recipes in an offline build.
- Do the following:
- <orderedlist>
- <listitem><para>
- Use a configuration generated by
- enabling
- <link linkend='maintaining-build-output-quality'>build history</link>.
- </para></listitem>
- <listitem><para>
- Use the
- <filename>buildhistory-collect-srcrevs</filename>
- command to collect the stored
- <filename>SRCREV</filename> values from
- the build's history.
- For more information on collecting these
- values, see the
- "<link linkend='build-history-package-information'>Build History Package Information</link>"
- section.
- </para></listitem>
- <listitem><para>
- Once you have the correct source
- revisions, you can modify those recipes
-                                    to set <filename>SRCREV</filename>
- to specific versions of the software.
- </para></listitem>
- </orderedlist>
- </para>
- </note>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
- </section>
-
- <section id='speeding-up-a-build'>
- <title>Speeding Up a Build</title>
-
- <para>
- Build time can be an issue.
-        By default, the build system uses simple controls to try to maximize
- build efficiency.
- In general, the default settings for all the following variables
- result in the most efficient build times when dealing with single
- socket systems (i.e. a single CPU).
- If you have multiple CPUs, you might try increasing the default
- values to gain more speed.
- See the descriptions in the glossary for each variable for more
-            information (a brief example follows this list):
- <itemizedlist>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename>:</ulink>
- The maximum number of threads BitBake simultaneously executes.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_NUMBER_PARSE_THREADS'><filename>BB_NUMBER_PARSE_THREADS</filename>:</ulink>
- The number of threads BitBake uses during parsing.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename>:</ulink>
- Extra options passed to the <filename>make</filename> command
- during the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-compile'><filename>do_compile</filename></ulink>
- task in order to specify parallel compilation on the
- local build host.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKEINST'><filename>PARALLEL_MAKEINST</filename>:</ulink>
- Extra options passed to the <filename>make</filename> command
- during the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>
- task in order to specify parallel installation on the
- local build host.
- </para></listitem>
- </itemizedlist>
- As mentioned, these variables all scale to the number of processor
- cores available on the build system.
- For single socket systems, this auto-scaling ensures that the build
- system fundamentally takes advantage of potential parallel operations
- during the build based on the build machine's capabilities.
- </para>
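-
-        <para>
-            For example, on a build host with eight cores you might place
-            settings such as the following in your
-            <filename>local.conf</filename> file.
-            The values are illustrative only; because the defaults already
-            scale to the number of cores, explicit settings are mainly
-            useful for capping or tuning resource usage:
-            <literallayout class='monospaced'>
-     BB_NUMBER_THREADS = "8"
-     PARALLEL_MAKE = "-j 8"
-            </literallayout>
-        </para>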
-
- <para>
- Following are additional factors that can affect build speed:
- <itemizedlist>
- <listitem><para>
- File system type:
- The file system type that the build is being performed on can
- also influence performance.
- Using <filename>ext4</filename> is recommended as compared
- to <filename>ext2</filename> and <filename>ext3</filename>
-                    due to <filename>ext4</filename>'s improved features
- such as extents.
- </para></listitem>
- <listitem><para>
- Disabling the updating of access time using
- <filename>noatime</filename>:
- The <filename>noatime</filename> mount option prevents the
- build system from updating file and directory access times.
- </para></listitem>
- <listitem><para>
-                    Setting a longer commit interval:
- Using the "commit=" mount option increases the interval
- in seconds between disk cache writes.
- Changing this interval from the five second default to
- something longer increases the risk of data loss but decreases
- the need to write to the disk, thus increasing the build
- performance.
- </para></listitem>
- <listitem><para>
- Choosing the packaging backend:
- Of the available packaging backends, IPK is the fastest.
-                        Additionally, selecting a single packaging backend also
-                        helps.
- </para></listitem>
- <listitem><para>
- Using <filename>tmpfs</filename> for
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>
- as a temporary file system:
- While this can help speed up the build, the benefits are
- limited due to the compiler using
- <filename>-pipe</filename>.
- The build system goes to some lengths to avoid
- <filename>sync()</filename> calls into the
- file system on the principle that if there was a significant
- failure, the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- contents could easily be rebuilt.
- </para></listitem>
- <listitem><para>
- Inheriting the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-rm-work'><filename>rm_work</filename></ulink>
- class:
-                        Inheriting this class has been shown to speed up builds due to
- significantly lower amounts of data stored in the data
- cache as well as on disk.
- Inheriting this class also makes cleanup of
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>
- faster, at the expense of being easily able to dive into the
- source code.
- File system maintainers have recommended that the fastest way
- to clean up large numbers of files is to reformat partitions
- rather than delete files due to the linear nature of
- partitions.
-                        This, of course, assumes you structure the disk partitions and
-                        file systems in a way that makes this practical.
- </para></listitem>
- </itemizedlist>
-                Aside from the previous list, keep in mind the following
-                trade-offs that can help you speed up the build (an example
-                configuration combining several of them follows this list):
- <itemizedlist>
- <listitem><para>
- Remove items from
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></ulink>
- that you might not need.
- </para></listitem>
- <listitem><para>
- Exclude debug symbols and other debug information:
- If you do not need these symbols and other debug information,
- disabling the <filename>*-dbg</filename> package generation
- can speed up the build.
- You can disable this generation by setting the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-INHIBIT_PACKAGE_DEBUG_SPLIT'><filename>INHIBIT_PACKAGE_DEBUG_SPLIT</filename></ulink>
- variable to "1".
- </para></listitem>
- <listitem><para>
- Disable static library generation for recipes derived from
- <filename>autoconf</filename> or <filename>libtool</filename>:
- Following is an example showing how to disable static
- libraries and still provide an override to handle exceptions:
- <literallayout class='monospaced'>
- STATICLIBCONF = "--disable-static"
- STATICLIBCONF_sqlite3-native = ""
- EXTRA_OECONF += "${STATICLIBCONF}"
- </literallayout>
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- Some recipes need static libraries in order to work
- correctly (e.g. <filename>pseudo-native</filename>
- needs <filename>sqlite3-native</filename>).
- Overrides, as in the previous example, account for
- these kinds of exceptions.
- </para></listitem>
- <listitem><para>
- Some packages have packaging code that assumes the
- presence of the static libraries.
- If so, you might need to exclude them as well.
- </para></listitem>
- </itemizedlist>
- </note>
- </para></listitem>
- </itemizedlist>
- </para>
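-
-            <para>
-                As a sketch only, the following <filename>local.conf</filename>
-                fragment combines several of the preceding suggestions.
-                The feature names removed from
-                <filename>DISTRO_FEATURES</filename> are example values;
-                remove only features you know you do not need:
-                <literallayout class='monospaced'>
-     # Use a single, fast packaging backend
-     PACKAGE_CLASSES = "package_ipk"
-
-     # Skip generation of *-dbg packages
-     INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
-
-     # Remove distribution features you do not need (example values)
-     DISTRO_FEATURES_remove = "x11 wayland"
-
-     # Clean up work directories as the build progresses
-     INHERIT += "rm_work"
-                </literallayout>
-            </para>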
- </section>
-
- <section id="platdev-working-with-libraries">
- <title>Working With Libraries</title>
-
- <para>
- Libraries are an integral part of your system.
- This section describes some common practices you might find
- helpful when working with libraries to build your system:
- <itemizedlist>
- <listitem><para><link linkend='including-static-library-files'>How to include static library files</link>
- </para></listitem>
- <listitem><para><link linkend='combining-multiple-versions-library-files-into-one-image'>How to use the Multilib feature to combine multiple versions of library files into a single image</link>
- </para></listitem>
- <listitem><para><link linkend='installing-multiple-versions-of-the-same-library'>How to install multiple versions of the same library in parallel on the same system</link>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id='including-static-library-files'>
- <title>Including Static Library Files</title>
-
- <para>
- If you are building a library and the library offers static linking, you can control
- which static library files (<filename>*.a</filename> files) get included in the
- built library.
- </para>
-
- <para>
- The <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'><filename>PACKAGES</filename></ulink>
- and <ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES_*</filename></ulink>
- variables in the
- <filename>meta/conf/bitbake.conf</filename> configuration file define how files installed
- by the <filename>do_install</filename> task are packaged.
- By default, the <filename>PACKAGES</filename> variable includes
- <filename>${PN}-staticdev</filename>, which represents all static library files.
- <note>
- Some previously released versions of the Yocto Project
- defined the static library files through
- <filename>${PN}-dev</filename>.
- </note>
- Following is part of the BitBake configuration file, where
- you can see how the static library files are defined:
- <literallayout class='monospaced'>
- PACKAGE_BEFORE_PN ?= ""
- PACKAGES = "${PN}-dbg ${PN}-staticdev ${PN}-dev ${PN}-doc ${PN}-locale ${PACKAGE_BEFORE_PN} ${PN}"
- PACKAGES_DYNAMIC = "^${PN}-locale-.*"
- FILES = ""
-
- FILES_${PN} = "${bindir}/* ${sbindir}/* ${libexecdir}/* ${libdir}/lib*${SOLIBS} \
- ${sysconfdir} ${sharedstatedir} ${localstatedir} \
- ${base_bindir}/* ${base_sbindir}/* \
- ${base_libdir}/*${SOLIBS} \
- ${base_prefix}/lib/udev/rules.d ${prefix}/lib/udev/rules.d \
- ${datadir}/${BPN} ${libdir}/${BPN}/* \
- ${datadir}/pixmaps ${datadir}/applications \
- ${datadir}/idl ${datadir}/omf ${datadir}/sounds \
- ${libdir}/bonobo/servers"
-
- FILES_${PN}-bin = "${bindir}/* ${sbindir}/*"
-
- FILES_${PN}-doc = "${docdir} ${mandir} ${infodir} ${datadir}/gtk-doc \
- ${datadir}/gnome/help"
- SECTION_${PN}-doc = "doc"
-
- FILES_SOLIBSDEV ?= "${base_libdir}/lib*${SOLIBSDEV} ${libdir}/lib*${SOLIBSDEV}"
- FILES_${PN}-dev = "${includedir} ${FILES_SOLIBSDEV} ${libdir}/*.la \
- ${libdir}/*.o ${libdir}/pkgconfig ${datadir}/pkgconfig \
- ${datadir}/aclocal ${base_libdir}/*.o \
- ${libdir}/${BPN}/*.la ${base_libdir}/*.la"
- SECTION_${PN}-dev = "devel"
- ALLOW_EMPTY_${PN}-dev = "1"
- RDEPENDS_${PN}-dev = "${PN} (= ${EXTENDPKGV})"
-
- FILES_${PN}-staticdev = "${libdir}/*.a ${base_libdir}/*.a ${libdir}/${BPN}/*.a"
- SECTION_${PN}-staticdev = "devel"
- RDEPENDS_${PN}-staticdev = "${PN}-dev (= ${EXTENDPKGV})"
- </literallayout>
- </para>
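-
-            <para>
-                If a recipe installs additional static libraries into a
-                non-standard location, you can extend the default packaging
-                from within the recipe itself.
-                The following line is a sketch only; the
-                <filename>${libdir}/myplugin</filename> path is a hypothetical
-                example:
-                <literallayout class='monospaced'>
-     FILES_${PN}-staticdev += "${libdir}/myplugin/*.a"
-                </literallayout>
-            </para>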
- </section>
-
- <section id="combining-multiple-versions-library-files-into-one-image">
- <title>Combining Multiple Versions of Library Files into One Image</title>
-
- <para>
- The build system offers the ability to build libraries with different
- target optimizations or architecture formats and combine these together
- into one system image.
- You can link different binaries in the image
- against the different libraries as needed for specific use cases.
- This feature is called "Multilib."
- </para>
-
- <para>
- An example would be where you have most of a system compiled in 32-bit
- mode using 32-bit libraries, but you have something large, like a database
- engine, that needs to be a 64-bit application and uses 64-bit libraries.
- Multilib allows you to get the best of both 32-bit and 64-bit libraries.
- </para>
-
- <para>
-                While the Multilib feature is most commonly used for 32-bit and 64-bit differences,
- the approach the build system uses facilitates different target optimizations.
- You could compile some binaries to use one set of libraries and other binaries
- to use a different set of libraries.
- The libraries could differ in architecture, compiler options, or other
- optimizations.
- </para>
-
- <para>
- Several examples exist in the
- <filename>meta-skeleton</filename> layer found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>:
- <itemizedlist>
- <listitem><para><filename>conf/multilib-example.conf</filename>
- configuration file</para></listitem>
- <listitem><para><filename>conf/multilib-example2.conf</filename>
- configuration file</para></listitem>
- <listitem><para><filename>recipes-multilib/images/core-image-multilib-example.bb</filename>
- recipe</para></listitem>
- </itemizedlist>
- </para>
-
- <section id='preparing-to-use-multilib'>
- <title>Preparing to Use Multilib</title>
-
- <para>
- User-specific requirements drive the Multilib feature.
-                Consequently, it is unlikely that a single "out-of-the-box"
-                configuration exists that meets your needs.
- </para>
-
- <para>
- In order to enable Multilib, you first need to ensure your recipe is
- extended to support multiple libraries.
- Many standard recipes are already extended and support multiple libraries.
- You can check in the <filename>meta/conf/multilib.conf</filename>
- configuration file in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink> to see how this is
- done using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBCLASSEXTEND'><filename>BBCLASSEXTEND</filename></ulink>
- variable.
- Eventually, all recipes will be covered and this list will
- not be needed.
- </para>
-
- <para>
- For the most part, the Multilib class extension works automatically to
- extend the package name from <filename>${PN}</filename> to
- <filename>${MLPREFIX}${PN}</filename>, where <filename>MLPREFIX</filename>
- is the particular multilib (e.g. "lib32-" or "lib64-").
- Standard variables such as
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RPROVIDES'><filename>RPROVIDES</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RRECOMMENDS'><filename>RRECOMMENDS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'><filename>PACKAGES</filename></ulink>, and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES_DYNAMIC'><filename>PACKAGES_DYNAMIC</filename></ulink>
- are automatically extended by the system.
- If you are extending any manual code in the recipe, you can use the
- <filename>${MLPREFIX}</filename> variable to ensure those names are extended
- correctly.
- This automatic extension code resides in <filename>multilib.bbclass</filename>.
- </para>
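-
-            <para>
-                For example, if a recipe adds a runtime dependency by hand,
-                prefixing the package name with
-                <filename>${MLPREFIX}</filename> keeps the dependency correct
-                when the recipe is built as a multilib variant.
-                The following line is a sketch only; the dependency shown is
-                just an example:
-                <literallayout class='monospaced'>
-     RDEPENDS_${PN} += "${MLPREFIX}glib-2.0"
-                </literallayout>
-            </para>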
- </section>
-
- <section id='using-multilib'>
- <title>Using Multilib</title>
-
- <para>
- After you have set up the recipes, you need to define the actual
- combination of multiple libraries you want to build.
- You accomplish this through your <filename>local.conf</filename>
- configuration file in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- An example configuration would be as follows:
- <literallayout class='monospaced'>
- MACHINE = "qemux86-64"
- require conf/multilib.conf
- MULTILIBS = "multilib:lib32"
- DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
- IMAGE_INSTALL_append = " lib32-glib-2.0"
- </literallayout>
- This example enables an
- additional library named <filename>lib32</filename> alongside the
- normal target packages.
- When combining these "lib32" alternatives, the example uses "x86" for tuning.
- For information on this particular tuning, see
- <filename>meta/conf/machine/include/ia32/arch-ia32.inc</filename>.
- </para>
-
- <para>
- The example then includes <filename>lib32-glib-2.0</filename>
- in all the images, which illustrates one method of including a
- multiple library dependency.
- You can use a normal image build to include this dependency,
- for example:
- <literallayout class='monospaced'>
- $ bitbake core-image-sato
- </literallayout>
- You can also build Multilib packages specifically with a command like this:
- <literallayout class='monospaced'>
- $ bitbake lib32-glib-2.0
- </literallayout>
- </para>
- </section>
-
- <section id='additional-implementation-details'>
- <title>Additional Implementation Details</title>
-
- <para>
- Generic implementation details as well as details that are
- specific to package management systems exist.
- Following are implementation details that exist regardless
- of the package management system:
- <itemizedlist>
-                        <listitem><para>The typical convention used by the
-                            Multilib class extension code assumes that all
-                            package names specified in
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'><filename>PACKAGES</filename></ulink>
-                            that contain <filename>${PN}</filename> have
-                            <filename>${PN}</filename> at the start of the name.
-                            When this convention is not followed and
-                            <filename>${PN}</filename> appears in
-                            the middle or at the end of a name, problems occur.
- </para></listitem>
- <listitem><para>The
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TARGET_VENDOR'><filename>TARGET_VENDOR</filename></ulink>
- value under Multilib will be extended to
- "-<replaceable>vendor</replaceable>ml<replaceable>multilib</replaceable>"
- (e.g. "-pokymllib32" for a "lib32" Multilib with
- Poky).
- The reason for this slightly unwieldy contraction
- is that any "-" characters in the vendor
- string presently break Autoconf's
- <filename>config.sub</filename>, and
- other separators are problematic for different
- reasons.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- For the RPM Package Management System, the following implementation details
- exist:
- <itemizedlist>
- <listitem><para>A unique architecture is defined for the Multilib packages,
- along with creating a unique deploy folder under
- <filename>tmp/deploy/rpm</filename> in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- For example, consider <filename>lib32</filename> in a
- <filename>qemux86-64</filename> image.
- The possible architectures in the system are "all", "qemux86_64",
- "lib32_qemux86_64", and "lib32_x86".</para></listitem>
- <listitem><para>The <filename>${MLPREFIX}</filename> variable is stripped from
- <filename>${PN}</filename> during RPM packaging.
- The naming for a normal RPM package and a Multilib RPM package in a
- <filename>qemux86-64</filename> system resolves to something similar to
- <filename>bash-4.1-r2.x86_64.rpm</filename> and
-                        <filename>bash-4.1-r2.lib32_x86.rpm</filename>, respectively.
- </para></listitem>
- <listitem><para>When installing a Multilib image, the RPM backend first
- installs the base image and then installs the Multilib libraries.
- </para></listitem>
- <listitem><para>The build system relies on RPM to resolve the identical files in the
- two (or more) Multilib packages.</para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- For the IPK Package Management System, the following implementation details exist:
- <itemizedlist>
- <listitem><para>The <filename>${MLPREFIX}</filename> is not stripped from
- <filename>${PN}</filename> during IPK packaging.
-                        The naming for a normal IPK package and a Multilib IPK package in a
-                        <filename>qemux86-64</filename> system resolves to something like
-                        <filename>bash_4.1-r2.x86_64.ipk</filename> and
-                        <filename>lib32-bash_4.1-r2_x86.ipk</filename>, respectively.
- </para></listitem>
- <listitem><para>The IPK deploy folder is not modified with
- <filename>${MLPREFIX}</filename> because packages with and without
- the Multilib feature can exist in the same folder due to the
- <filename>${PN}</filename> differences.</para></listitem>
- <listitem><para>IPK defines a sanity check for Multilib installation
-                        using certain rules for file comparison, overrides, and so forth.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id='installing-multiple-versions-of-the-same-library'>
- <title>Installing Multiple Versions of the Same Library</title>
-
- <para>
- Situations can exist where you need to install and use
- multiple versions of the same library on the same system
- at the same time.
- These situations almost always exist when a library API
- changes and you have multiple pieces of software that
- depend on the separate versions of the library.
- To accommodate these situations, you can install multiple
- versions of the same library in parallel on the same system.
- </para>
-
- <para>
- The process is straightforward as long as the libraries use
- proper versioning.
- With properly versioned libraries, all you need to do to
- individually specify the libraries is create separate,
- appropriately named recipes where the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink> part of the
- name includes a portion that differentiates each library version
-                (e.g. the major part of the version number).
- Thus, instead of having a single recipe that loads one version
- of a library (e.g. <filename>clutter</filename>), you provide
- multiple recipes that result in different versions
- of the libraries you want.
- As an example, the following two recipes would allow the
- two separate versions of the <filename>clutter</filename>
- library to co-exist on the same system:
- <literallayout class='monospaced'>
- clutter-1.6_1.6.20.bb
- clutter-1.8_1.8.4.bb
- </literallayout>
- Additionally, if you have other recipes that depend on a given
- library, you need to use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- variable to create the dependency.
- Continuing with the same example, if you want to have a recipe
- depend on the 1.8 version of the <filename>clutter</filename>
- library, use the following in your recipe:
- <literallayout class='monospaced'>
- DEPENDS = "clutter-1.8"
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id='using-x32-psabi'>
- <title>Using x32 psABI</title>
-
- <para>
- x32 processor-specific Application Binary Interface
- (<ulink url='https://software.intel.com/en-us/node/628948'>x32 psABI</ulink>)
- is a native 32-bit processor-specific ABI for
- <trademark class='registered'>Intel</trademark> 64 (x86-64)
- architectures.
- An ABI defines the calling conventions between functions in a
- processing environment.
- The interface determines what registers are used and what the
- sizes are for various C data types.
- </para>
-
- <para>
- Some processing environments prefer using 32-bit applications even
- when running on Intel 64-bit platforms.
- Consider the i386 psABI, which is a very old 32-bit ABI for Intel
- 64-bit platforms.
-            The i386 psABI does not make efficient use of the
-            Intel 64-bit processor resources, leaving the system underutilized.
- Now consider the x86_64 psABI.
- This ABI is newer and uses 64-bits for data sizes and program
- pointers.
-            The extra bits increase the footprint size of programs and
-            libraries and also increase the memory and file system size
-            requirements.
- Executing under the x32 psABI enables user programs to utilize CPU
- and system resources more efficiently while keeping the memory
- footprint of the applications low.
- Extra bits are used for registers but not for addressing mechanisms.
- </para>
-
- <para>
- The Yocto Project supports the final specifications of x32 psABI
- as follows:
- <itemizedlist>
- <listitem><para>
- You can create packages and images in x32 psABI format on
- x86_64 architecture targets.
- </para></listitem>
- <listitem><para>
- You can successfully build recipes with the x32 toolchain.
- </para></listitem>
- <listitem><para>
- You can create and boot
- <filename>core-image-minimal</filename> and
- <filename>core-image-sato</filename> images.
- </para></listitem>
- <listitem><para>
- RPM Package Manager (RPM) support exists for x32 binaries.
- </para></listitem>
- <listitem><para>
- Support for large images exists.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- To use the x32 psABI, you need to edit your
- <filename>conf/local.conf</filename> configuration file as
- follows:
- <literallayout class='monospaced'>
- MACHINE = "qemux86-64"
- DEFAULTTUNE = "x86-64-x32"
- baselib = "${@d.getVar('BASE_LIB_tune-' + (d.getVar('DEFAULTTUNE') \
- or 'INVALID')) or 'lib'}"
- </literallayout>
- Once you have set up your configuration file, use BitBake to
- build an image that supports the x32 psABI.
- Here is an example:
- <literallayout class='monospaced'>
- $ bitbake core-image-sato
- </literallayout>
- </para>
- </section>
-
- <section id='enabling-gobject-introspection-support'>
- <title>Enabling GObject Introspection Support</title>
-
- <para>
- <ulink url='https://wiki.gnome.org/Projects/GObjectIntrospection'>GObject introspection</ulink>
- is the standard mechanism for accessing GObject-based software
- from runtime environments.
- GObject is a feature of the GLib library that provides an object
- framework for the GNOME desktop and related software.
- GObject Introspection adds information to GObject that allows
- objects created within it to be represented across different
- programming languages.
- If you want to construct GStreamer pipelines using Python, or
-            control UPnP infrastructure using JavaScript and GUPnP,
- GObject introspection is the only way to do it.
- </para>
-
- <para>
- This section describes the Yocto Project support for generating
- and packaging GObject introspection data.
- GObject introspection data is a description of the
- API provided by libraries built on top of GLib framework,
- and, in particular, that framework's GObject mechanism.
-            GObject Introspection Repository (GIR) files go to
-            <filename>-dev</filename> packages, while
-            <filename>typelib</filename> files go to main packages because they
-            are packaged together with the libraries that are introspected.
- </para>
-
- <para>
- The data is generated when building such a library, by linking
- the library with a small executable binary that asks the library
- to describe itself, and then executing the binary and
- processing its output.
- </para>
-
- <para>
- Generating this data in a cross-compilation environment
- is difficult because the library is produced for the target
- architecture, but its code needs to be executed on the build host.
- This problem is solved with the OpenEmbedded build system by
- running the code through QEMU, which allows precisely that.
- Unfortunately, QEMU does not always work perfectly as mentioned
- in the
- "<link linkend='known-issues'>Known Issues</link>" section.
- </para>
-
- <section id='enabling-the-generation-of-introspection-data'>
- <title>Enabling the Generation of Introspection Data</title>
-
- <para>
- Enabling the generation of introspection data (GIR files)
- in your library package involves the following:
- <orderedlist>
- <listitem><para>
- Inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-gobject-introspection'><filename>gobject-introspection</filename></ulink>
- class.
- </para></listitem>
- <listitem><para>
- Make sure introspection is not disabled anywhere in
- the recipe or from anything the recipe includes.
- Also, make sure that "gobject-introspection-data" is
- not in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES_BACKFILL_CONSIDERED'><filename>DISTRO_FEATURES_BACKFILL_CONSIDERED</filename></ulink>
- and that "qemu-usermode" is not in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_FEATURES_BACKFILL_CONSIDERED'><filename>MACHINE_FEATURES_BACKFILL_CONSIDERED</filename></ulink>.
-                        If either of these conditions exists, no introspection
-                        data is generated.
- </para></listitem>
- <listitem><para>
- Try to build the recipe.
- If you encounter build errors that look like
- something is unable to find
- <filename>.so</filename> libraries, check where these
- libraries are located in the source tree and add
- the following to the recipe:
- <literallayout class='monospaced'>
- GIR_EXTRA_LIBS_PATH = "${B}/<replaceable>something</replaceable>/.libs"
- </literallayout>
- <note>
- See recipes in the <filename>oe-core</filename>
- repository that use that
- <filename>GIR_EXTRA_LIBS_PATH</filename> variable
- as an example.
- </note>
- </para></listitem>
- <listitem><para>
- Look for any other errors, which probably mean that
- introspection support in a package is not entirely
- standard, and thus breaks down in a cross-compilation
- environment.
- For such cases, custom-made fixes are needed.
- A good place to ask and receive help in these cases
- is the
- <ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Yocto Project mailing lists</ulink>.
- </para></listitem>
- </orderedlist>
- <note>
-                    A library that no longer builds against the latest
-                    Yocto Project release and that prints introspection-related
-                    errors is a good candidate for the previous procedure.
- </note>
- </para>
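-
-            <para>
-                Taken together, the first and third steps usually amount to no
-                more than a couple of lines in the recipe.
-                The following excerpt is a sketch only; the directory under
-                <filename>${B}</filename> is a hypothetical example:
-                <literallayout class='monospaced'>
-     inherit gobject-introspection
-
-     # Only needed if the build cannot find the freshly built .so files
-     GIR_EXTRA_LIBS_PATH = "${B}/src/.libs"
-                </literallayout>
-            </para>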
- </section>
-
- <section id='disabling-the-generation-of-introspection-data'>
- <title>Disabling the Generation of Introspection Data</title>
-
- <para>
- You might find that you do not want to generate
- introspection data.
- Or, perhaps QEMU does not work on your build host and
- target architecture combination.
- If so, you can use either of the following methods to
- disable GIR file generations:
- <itemizedlist>
- <listitem><para>
- Add the following to your distro configuration:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_BACKFILL_CONSIDERED = "gobject-introspection-data"
- </literallayout>
- Adding this statement disables generating
- introspection data using QEMU but will still enable
- building introspection tools and libraries
- (i.e. building them does not require the use of QEMU).
- </para></listitem>
- <listitem><para>
- Add the following to your machine configuration:
- <literallayout class='monospaced'>
- MACHINE_FEATURES_BACKFILL_CONSIDERED = "qemu-usermode"
- </literallayout>
- Adding this statement disables the use of QEMU
- when building packages for your machine.
- Currently, this feature is used only by introspection
- recipes and has the same effect as the previously
- described option.
- <note>
- Future releases of the Yocto Project might have
- other features affected by this option.
- </note>
- </para></listitem>
- </itemizedlist>
- If you disable introspection data, you can still
- obtain it through other means such as copying the data
- from a suitable sysroot, or by generating it on the
- target hardware.
- The OpenEmbedded build system does not currently
- provide specific support for these techniques.
- </para>
- </section>
-
- <section id='testing-that-introspection-works-in-an-image'>
- <title>Testing that Introspection Works in an Image</title>
-
- <para>
- Use the following procedure to test if generating
- introspection data is working in an image:
- <orderedlist>
- <listitem><para>
- Make sure that "gobject-introspection-data" is not in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES_BACKFILL_CONSIDERED'><filename>DISTRO_FEATURES_BACKFILL_CONSIDERED</filename></ulink>
- and that "qemu-usermode" is not in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_FEATURES_BACKFILL_CONSIDERED'><filename>MACHINE_FEATURES_BACKFILL_CONSIDERED</filename></ulink>.
- </para></listitem>
- <listitem><para>
- Build <filename>core-image-sato</filename>.
- </para></listitem>
- <listitem><para>
- Launch a Terminal and then start Python in the
- terminal.
- </para></listitem>
- <listitem><para>
- Enter the following in the terminal:
- <literallayout class='monospaced'>
- >>> from gi.repository import GLib
- >>> GLib.get_host_name()
- </literallayout>
- </para></listitem>
- <listitem><para>
-                        For something a little more advanced, try the examples
-                        from the Python GTK+ 3 tutorial found at the following
-                        location (a short snippet based on it appears after
-                        this list):
- <literallayout class='monospaced'>
- http://python-gtk-3-tutorial.readthedocs.org/en/latest/introduction.html
- </literallayout>
- </para></listitem>
- </orderedlist>
- </para>
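-
-            <para>
-                As a slightly more advanced check, the following minimal
-                snippet, loosely based on the tutorial linked above, opens an
-                empty GTK window if introspection data for GTK is present in
-                the image.
-                The window title is arbitrary:
-                <literallayout class='monospaced'>
-     >>> import gi
-     >>> gi.require_version('Gtk', '3.0')
-     >>> from gi.repository import Gtk
-     >>> win = Gtk.Window(title="introspection test")
-     >>> win.connect("destroy", Gtk.main_quit)
-     >>> win.show_all()
-     >>> Gtk.main()
-                </literallayout>
-            </para>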
- </section>
-
- <section id='known-issues'>
- <title>Known Issues</title>
-
- <para>
-            The following known issues exist for
- GObject Introspection Support:
- <itemizedlist>
- <listitem><para>
- <filename>qemu-ppc64</filename> immediately crashes.
- Consequently, you cannot build introspection data on
- that architecture.
- </para></listitem>
- <listitem><para>
- x32 is not supported by QEMU.
- Consequently, introspection data is disabled.
- </para></listitem>
- <listitem><para>
- musl causes transient GLib binaries to crash on
- assertion failures.
- Consequently, generating introspection data is
- disabled.
- </para></listitem>
- <listitem><para>
- Because QEMU is not able to run the binaries correctly,
- introspection is disabled for some specific packages
- under specific architectures (e.g.
- <filename>gcr</filename>,
- <filename>libsecret</filename>, and
- <filename>webkit</filename>).
- </para></listitem>
- <listitem><para>
- QEMU usermode might not work properly when running
- 64-bit binaries under 32-bit host machines.
- In particular, "qemumips64" is known to not work under
- i686.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id='dev-optionally-using-an-external-toolchain'>
- <title>Optionally Using an External Toolchain</title>
-
- <para>
- You might want to use an external toolchain as part of your
- development.
- If this is the case, the fundamental steps you need to accomplish
- are as follows:
- <itemizedlist>
- <listitem><para>
- Understand where the installed toolchain resides.
- For cases where you need to build the external toolchain,
- you would need to take separate steps to build and install
- the toolchain.
- </para></listitem>
- <listitem><para>
- Make sure you add the layer that contains the toolchain to
- your <filename>bblayers.conf</filename> file through the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
- variable.
- </para></listitem>
- <listitem><para>
- Set the <filename>EXTERNAL_TOOLCHAIN</filename>
- variable in your <filename>local.conf</filename> file
- to the location in which you installed the toolchain.
- </para></listitem>
- </itemizedlist>
- A good example of an external toolchain used with the Yocto Project
-            is the <trademark class='registered'>Mentor Graphics</trademark>
-            Sourcery G++ Toolchain.
- You can see information on how to use that particular layer in the
- <filename>README</filename> file at
- <ulink url='http://github.com/MentorEmbedded/meta-sourcery/'></ulink>.
- You can find further information by reading about the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TCMODE'><filename>TCMODE</filename></ulink>
- variable in the Yocto Project Reference Manual's variable glossary.
- </para>
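-
-        <para>
-            As a sketch only, and assuming the toolchain layer and the
-            toolchain itself are installed at the hypothetical paths shown,
-            the previous steps translate into configuration similar to the
-            following:
-            <literallayout class='monospaced'>
-     # conf/bblayers.conf
-     BBLAYERS += "/path/to/meta-sourcery"
-
-     # conf/local.conf
-     EXTERNAL_TOOLCHAIN = "/path/to/installed/toolchain"
-            </literallayout>
-        </para>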
- </section>
-
- <section id='creating-partitioned-images-using-wic'>
- <title>Creating Partitioned Images Using Wic</title>
-
- <para>
- Creating an image for a particular hardware target using the
- OpenEmbedded build system does not necessarily mean you can boot
- that image as is on your device.
- Physical devices accept and boot images in various ways depending
- on the specifics of the device.
- Usually, information about the hardware can tell you what image
- format the device requires.
- Should your device require multiple partitions on an SD card, flash,
- or an HDD, you can use the OpenEmbedded Image Creator,
- Wic, to create the properly partitioned image.
- </para>
-
- <para>
- The <filename>wic</filename> command generates partitioned
- images from existing OpenEmbedded build artifacts.
- Image generation is driven by partitioning commands
-            contained in an OpenEmbedded kickstart file
- (<filename>.wks</filename>) specified either directly on
- the command line or as one of a selection of canned
- kickstart files as shown with the
- <filename>wic list images</filename> command in the
- "<link linkend='using-a-provided-kickstart-file'>Using an Existing Kickstart File</link>"
- section.
- When you apply the command to a given set of build
- artifacts, the result is an image or set of images that
- can be directly written onto media and used on a particular
- system.
- <note>
- For a kickstart file reference, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-kickstart'>OpenEmbedded Kickstart (<filename>.wks</filename>) Reference</ulink>"
- Chapter in the Yocto Project Reference Manual.
- </note>
- </para>
-
- <para>
- The <filename>wic</filename> command and the infrastructure
-            it is based on are by definition incomplete.
- The purpose of the command is to allow the generation of
- customized images, and as such, was designed to be
- completely extensible through a plugin interface.
- See the
-            "<link linkend='wic-using-the-wic-plugin-interface'>Using the Wic Plugin Interface</link>"
- section for information on these plugins.
- </para>
-
- <para>
-            This section provides some background information on Wic,
-            describes what you need to have in place to run the tool,
-            explains how to use the Wic utility, describes the Wic plugin
-            interface, and presents several examples that show how to use
-            Wic.
-
- <section id='wic-background'>
- <title>Background</title>
-
- <para>
- This section provides some background on the Wic utility.
- While none of this information is required to use
- Wic, you might find it interesting.
- <itemizedlist>
- <listitem><para>
- The name "Wic" is derived from OpenEmbedded
- Image Creator (oeic).
- The "oe" diphthong in "oeic" was promoted to the
- letter "w", because "oeic" is both difficult to
- remember and to pronounce.
- </para></listitem>
- <listitem><para>
- Wic is loosely based on the
- Meego Image Creator (<filename>mic</filename>)
- framework.
- The Wic implementation has been
- heavily modified to make direct use of OpenEmbedded
- build artifacts instead of package installation and
- configuration, which are already incorporated within
- the OpenEmbedded artifacts.
- </para></listitem>
- <listitem><para>
-                    Wic is a completely independent
-                    standalone utility that initially provides
-                    easier-to-use and more flexible replacements for
-                    existing functionality in OE-Core's
-                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-image-live'><filename>image-live</filename></ulink>
-                    class.
-                    The difference is that with Wic the functionality is
-                    implemented by a general-purpose partitioning language,
-                    which is based on the Red Hat kickstart syntax.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='wic-requirements'>
- <title>Requirements</title>
-
- <para>
- In order to use the Wic utility with the OpenEmbedded Build
- system, your system needs to meet the following
- requirements:
- <itemizedlist>
- <listitem><para>
- The Linux distribution on your development host must
- support the Yocto Project.
- See the
- "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
- section in the Yocto Project Reference Manual for
- the list of distributions that support the
- Yocto Project.
- </para></listitem>
- <listitem><para>
- The standard system utilities, such as
- <filename>cp</filename>, must be installed on your
- development host system.
- </para></listitem>
- <listitem><para>
- You must have sourced the build environment
- setup script (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>)
- found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- </para></listitem>
- <listitem><para>
- You need to have the build artifacts already
- available, which typically means that you must
- have already created an image using the
-                        OpenEmbedded build system (e.g.
- <filename>core-image-minimal</filename>).
- While it might seem redundant to generate an image
- in order to create an image using
- Wic, the current version of
- Wic requires the artifacts
- in the form generated by the OpenEmbedded build
- system.
- </para></listitem>
- <listitem><para>
- You must build several native tools, which are
- built to run on the build system:
- <literallayout class='monospaced'>
- $ bitbake parted-native dosfstools-native mtools-native
- </literallayout>
- </para></listitem>
- <listitem><para>
- Include "wic" as part of the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></ulink>
- variable.
- </para></listitem>
- <listitem><para>
- Include the name of the
- <ulink url='&YOCTO_DOCS_REF_URL;#openembedded-kickstart-wks-reference'>wic kickstart file</ulink>
- as part of the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-WKS_FILE'><filename>WKS_FILE</filename></ulink>
-                        variable (see the example following this list).
- </para></listitem>
- </itemizedlist>
- </para>
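-
-            <para>
-                For example, the last two requirements in the previous list
-                could be satisfied with lines such as the following in your
-                <filename>local.conf</filename> file.
-                The kickstart file name shown is just an example:
-                <literallayout class='monospaced'>
-     IMAGE_FSTYPES += "wic"
-     WKS_FILE = "directdisk-gpt.wks"
-                </literallayout>
-            </para>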
- </section>
-
- <section id='wic-getting-help'>
- <title>Getting Help</title>
-
- <para>
- You can get general help for the <filename>wic</filename>
- command by entering the <filename>wic</filename> command
- by itself or by entering the command with a help argument
- as follows:
- <literallayout class='monospaced'>
- $ wic -h
- $ wic --help
- $ wic help
- </literallayout>
- </para>
-
- <para>
- Currently, Wic supports seven commands:
- <filename>cp</filename>, <filename>create</filename>,
- <filename>help</filename>, <filename>list</filename>,
- <filename>ls</filename>, <filename>rm</filename>, and
- <filename>write</filename>.
- You can get help for all these commands except "help" by
- using the following form:
- <literallayout class='monospaced'>
- $ wic help <replaceable>command</replaceable>
- </literallayout>
- For example, the following command returns help for the
- <filename>write</filename> command:
- <literallayout class='monospaced'>
- $ wic help write
- </literallayout>
- </para>
-
- <para>
- Wic supports help for three topics:
- <filename>overview</filename>,
- <filename>plugins</filename>, and
- <filename>kickstart</filename>.
- You can get help for any topic using the following form:
- <literallayout class='monospaced'>
- $ wic help <replaceable>topic</replaceable>
- </literallayout>
- For example, the following returns overview help for Wic:
- <literallayout class='monospaced'>
- $ wic help overview
- </literallayout>
- </para>
-
- <para>
- One additional level of help exists for Wic.
- You can get help on individual images through the
- <filename>list</filename> command.
- You can use the <filename>list</filename> command to return the
- available Wic images as follows:
- <literallayout class='monospaced'>
- $ wic list images
- genericx86 Create an EFI disk image for genericx86*
- beaglebone-yocto Create SD card image for Beaglebone
- edgerouter Create SD card image for Edgerouter
- qemux86-directdisk Create a qemu machine 'pcbios' direct disk image
- directdisk-gpt Create a 'pcbios' direct disk image
- mkefidisk Create an EFI disk image
- directdisk Create a 'pcbios' direct disk image
- systemd-bootdisk Create an EFI disk image with systemd-boot
- mkhybridiso Create a hybrid ISO image
- sdimage-bootpart Create SD card image with a boot partition
- directdisk-multi-rootfs Create multi rootfs image using rootfs plugin
- directdisk-bootloader-config Create a 'pcbios' direct disk image with custom bootloader config
- </literallayout>
- Once you know the list of available Wic images, you can use
- <filename>help</filename> with the command to get help on a
- particular image.
- For example, the following command returns help on the
- "beaglebone-yocto" image:
- <literallayout class='monospaced'>
- $ wic list beaglebone-yocto help
-
-
- Creates a partitioned SD card image for Beaglebone.
- Boot files are located in the first vfat partition.
- </literallayout>
- </para>
- </section>
-
- <section id='operational-modes'>
- <title>Operational Modes</title>
-
- <para>
- You can use Wic in two different
- modes, depending on how much control you need for
-                specifying the OpenEmbedded build artifacts that are
- used for creating the image: Raw and Cooked:
- <itemizedlist>
- <listitem><para>
- <emphasis>Raw Mode:</emphasis>
- You explicitly specify build artifacts through
- Wic command-line arguments.
- </para></listitem>
- <listitem><para>
- <emphasis>Cooked Mode:</emphasis>
- The current
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- setting and image name are used to automatically
- locate and provide the build artifacts.
- You just supply a kickstart file and the name
- of the image from which to use artifacts.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Regardless of the mode you use, you need to have the build
- artifacts ready and available.
- </para>
-
- <section id='raw-mode'>
- <title>Raw Mode</title>
-
- <para>
- Running Wic in raw mode allows you to specify all the
- partitions through the <filename>wic</filename>
- command line.
- The primary use for raw mode is if you have built
- your kernel outside of the Yocto Project
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- In other words, you can point to arbitrary kernel,
- root filesystem locations, and so forth.
- Contrast this behavior with cooked mode where Wic
- looks in the Build Directory (e.g.
- <filename>tmp/deploy/images/</filename><replaceable>machine</replaceable>).
- </para>
-
- <para>
- The general form of the
- <filename>wic</filename> command in raw mode is:
- <literallayout class='monospaced'>
- $ wic create <replaceable>wks_file</replaceable> <replaceable>options</replaceable> ...
-
- Where:
-
- <replaceable>wks_file</replaceable>:
- An OpenEmbedded kickstart file. You can provide
- your own custom file or use a file from a set of
- existing files as described by further options.
-
- optional arguments:
- -h, --help show this help message and exit
- -o <replaceable>OUTDIR</replaceable>, --outdir <replaceable>OUTDIR</replaceable>
- name of directory to create image in
- -e <replaceable>IMAGE_NAME</replaceable>, --image-name <replaceable>IMAGE_NAME</replaceable>
- name of the image to use the artifacts from e.g. core-
- image-sato
- -r <replaceable>ROOTFS_DIR</replaceable>, --rootfs-dir <replaceable>ROOTFS_DIR</replaceable>
- path to the /rootfs dir to use as the .wks rootfs
- source
- -b <replaceable>BOOTIMG_DIR</replaceable>, --bootimg-dir <replaceable>BOOTIMG_DIR</replaceable>
- path to the dir containing the boot artifacts (e.g.
- /EFI or /syslinux dirs) to use as the .wks bootimg
- source
- -k <replaceable>KERNEL_DIR</replaceable>, --kernel-dir <replaceable>KERNEL_DIR</replaceable>
- path to the dir containing the kernel to use in the
- .wks bootimg
- -n <replaceable>NATIVE_SYSROOT</replaceable>, --native-sysroot <replaceable>NATIVE_SYSROOT</replaceable>
- path to the native sysroot containing the tools to use
- to build the image
- -s, --skip-build-check
- skip the build check
- -f, --build-rootfs build rootfs
- -c {gzip,bzip2,xz}, --compress-with {gzip,bzip2,xz}
- compress image with specified compressor
- -m, --bmap generate .bmap
- --no-fstab-update Do not change fstab file.
- -v <replaceable>VARS_DIR</replaceable>, --vars <replaceable>VARS_DIR</replaceable>
- directory with &lt;image&gt;.env files that store bitbake
- variables
- -D, --debug output debug information
- </literallayout>
- <note>
- You do not need root privileges to run
- Wic.
- In fact, you should not run as root when using the
- utility.
- </note>
- </para>
- </section>
-
- <section id='cooked-mode'>
- <title>Cooked Mode</title>
-
- <para>
- Running Wic in cooked mode leverages off artifacts in
- the Build Directory.
- In other words, you do not have to specify kernel or
- root filesystem locations as part of the command.
- All you need to provide is a kickstart file and the
- name of the image from which to use artifacts by using
- the "-e" option.
- Wic looks in the Build Directory (e.g.
- <filename>tmp/deploy/images/</filename><replaceable>machine</replaceable>)
- for artifacts.
- </para>
-
- <para>
- The general form of the <filename>wic</filename>
- command using Cooked Mode is as follows:
- <literallayout class='monospaced'>
- $ wic create <replaceable>wks_file</replaceable> -e <replaceable>IMAGE_NAME</replaceable>
-
- Where:
-
- <replaceable>wks_file</replaceable>:
- An OpenEmbedded kickstart file. You can provide
- your own custom file or use a file from a set of
- existing files provided with the Yocto Project
- release.
-
- required argument:
- -e <replaceable>IMAGE_NAME</replaceable>, --image-name <replaceable>IMAGE_NAME</replaceable>
- name of the image to use the artifacts from e.g. core-
- image-sato
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id='using-a-provided-kickstart-file'>
- <title>Using an Existing Kickstart File</title>
-
- <para>
- If you do not want to create your own kickstart file, you
- can use an existing file provided by the Wic installation.
- As shipped, kickstart files can be found in the
- Yocto Project
- <ulink url='&YOCTO_DOCS_OM_URL;#source-repositories'>Source Repositories</ulink>
- in the following two locations:
- <literallayout class='monospaced'>
- poky/meta-yocto-bsp/wic
- poky/scripts/lib/wic/canned-wks
- </literallayout>
- Use the following command to list the available kickstart
- files:
- <literallayout class='monospaced'>
- $ wic list images
- genericx86 Create an EFI disk image for genericx86*
- beaglebone-yocto Create SD card image for Beaglebone
- edgerouter Create SD card image for Edgerouter
- qemux86-directdisk Create a qemu machine 'pcbios' direct disk image
- directdisk-gpt Create a 'pcbios' direct disk image
- mkefidisk Create an EFI disk image
- directdisk Create a 'pcbios' direct disk image
- systemd-bootdisk Create an EFI disk image with systemd-boot
- mkhybridiso Create a hybrid ISO image
- sdimage-bootpart Create SD card image with a boot partition
- directdisk-multi-rootfs Create multi rootfs image using rootfs plugin
- directdisk-bootloader-config Create a 'pcbios' direct disk image with custom bootloader config
- </literallayout>
- When you use an existing file, you do not have to use the
- <filename>.wks</filename> extension.
- Here is an example in Raw Mode that uses the
- <filename>directdisk</filename> file:
- <literallayout class='monospaced'>
- $ wic create directdisk -r <replaceable>rootfs_dir</replaceable> -b <replaceable>bootimg_dir</replaceable> \
- -k <replaceable>kernel_dir</replaceable> -n <replaceable>native_sysroot</replaceable>
- </literallayout>
- </para>
-
- <para>
- Here are the actual partition language commands
- used in the <filename>genericx86.wks</filename> file to
- generate an image:
- <literallayout class='monospaced'>
- # short-description: Create an EFI disk image for genericx86*
- # long-description: Creates a partitioned EFI disk image for genericx86* machines
- part /boot --source bootimg-efi --sourceparams="loader=grub-efi" --ondisk sda --label msdos --active --align 1024
- part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024 --use-uuid
- part swap --ondisk sda --size 44 --label swap1 --fstype=swap
-
- bootloader --ptable gpt --timeout=5 --append="rootfstype=ext4 console=ttyS0,115200 console=tty0"
- </literallayout>
- </para>
- </section>
-
- <section id='wic-using-the-wic-plugin-interface'>
- <title>Using the Wic Plugin Interface</title>
-
- <para>
- You can extend and specialize Wic functionality by using
- Wic plugins.
- This section explains the Wic plugin interface.
- <note>
- Wic plugins consist of "source" and "imager" plugins.
- Imager plugins are beyond the scope of this section.
- </note>
- </para>
-
- <para>
- Source plugins provide a mechanism to customize partition
- content during the Wic image generation process.
- You can use source plugins to map values that you specify
- using <filename>--source</filename> commands in kickstart
- files (i.e. <filename>*.wks</filename>) to a plugin
- implementation used to populate a given partition.
- <note>
- If you use plugins that have build-time dependencies
- (e.g. native tools, bootloaders, and so forth)
- when building a Wic image, you need to specify those
- dependencies using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-WKS_FILE_DEPENDS'><filename>WKS_FILE_DEPENDS</filename></ulink>
- variable.
- </note>
- </para>
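-
-            <para>
-                For example, if a kickstart file uses a source plugin that
-                needs a native tool at image-creation time, you might add a
-                line such as the following to the image recipe or your
-                configuration.
-                The dependency shown is just an example:
-                <literallayout class='monospaced'>
-     WKS_FILE_DEPENDS = "dosfstools-native"
-                </literallayout>
-            </para>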
-
- <para>
- Source plugins are subclasses defined in plugin files.
- As shipped, the Yocto Project provides several plugin
- files.
- You can see the source plugin files that ship with the
- Yocto Project
- <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/scripts/lib/wic/plugins/source'>here</ulink>.
- Each of these plugin files contains source plugins that
- are designed to populate a specific Wic image partition.
- </para>
-
- <para>
- Source plugins are subclasses of the
- <filename>SourcePlugin</filename> class, which is
- defined in the
- <filename>poky/scripts/lib/wic/pluginbase.py</filename>
- file.
- For example, the <filename>BootimgEFIPlugin</filename>
- source plugin found in the
- <filename>bootimg-efi.py</filename> file is a subclass of
- the <filename>SourcePlugin</filename> class, which is found
- in the <filename>pluginbase.py</filename> file.
- </para>
-
- <para>
- You can also implement source plugins in a layer outside
- of the Source Repositories (external layer).
- To do so, be sure that your plugin files are located in
- a directory whose path is
- <filename>scripts/lib/wic/plugins/source/</filename>
- within your external layer.
- When the plugin files are located there, the source
- plugins they contain are made available to Wic.
- </para>
-
- <para>
- When the Wic implementation needs to invoke a
- partition-specific implementation, it looks for the plugin
- with the same name as the <filename>--source</filename>
- parameter used in the kickstart file given to that
- partition.
- For example, if the partition is set up using the following
- command in a kickstart file:
- <literallayout class='monospaced'>
- part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
- </literallayout>
- The methods defined as class members of the matching
- source plugin (i.e. <filename>bootimg-pcbios</filename>)
- in the <filename>bootimg-pcbios.py</filename> plugin file
- are used.
- </para>
-
- <para>
- To be more concrete, here is the corresponding plugin
- definition from the <filename>bootimg-pcbios.py</filename>
- file for the previous command along with an example
- method called by the Wic implementation when it needs to
- prepare a partition using an implementation-specific
- function:
- <literallayout class='monospaced'>
- .
- .
- .
- class BootimgPcbiosPlugin(SourcePlugin):
- """
- Create MBR boot partition and install syslinux on it.
- """
-
- name = 'bootimg-pcbios'
- .
- .
- .
- @classmethod
- def do_prepare_partition(cls, part, source_params, creator, cr_workdir,
- oe_builddir, bootimg_dir, kernel_dir,
- rootfs_dir, native_sysroot):
- """
- Called to do the actual content population for a partition i.e. it
- 'prepares' the partition to be incorporated into the image.
- In this case, prepare content for legacy bios boot partition.
- """
- .
- .
- .
- </literallayout>
- If a subclass (plugin) itself does not implement a
- particular function, Wic locates and uses the default
- version in the superclass.
- It is for this reason that all source plugins are derived
- from the <filename>SourcePlugin</filename> class.
- </para>
-
- <para>
- The <filename>SourcePlugin</filename> class defined in
- the <filename>pluginbase.py</filename> file defines
- a set of methods that source plugins can implement or
- override.
- Any plugins (subclass of
- <filename>SourcePlugin</filename>) that do not implement
- a particular method inherit the implementation of the
- method from the <filename>SourcePlugin</filename> class.
-                For details, see the
-                <filename>SourcePlugin</filename> class in the
-                <filename>pluginbase.py</filename> file.
- </para>
-
- <para>
- The following list describes the methods implemented in the
- <filename>SourcePlugin</filename> class:
- <itemizedlist>
- <listitem><para>
- <emphasis><filename>do_prepare_partition()</filename>:</emphasis>
- Called to populate a partition with actual content.
- In other words, the method prepares the final
- partition image that is incorporated into the
- disk image.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>do_configure_partition()</filename>:</emphasis>
- Called before
- <filename>do_prepare_partition()</filename> to
- create custom configuration files for a partition
- (e.g. syslinux or grub configuration files).
- </para></listitem>
- <listitem><para>
- <emphasis><filename>do_install_disk()</filename>:</emphasis>
- Called after all partitions have been prepared and
- assembled into a disk image.
- This method provides a hook to allow finalization
- of a disk image (e.g. writing an MBR).
- </para></listitem>
- <listitem><para>
- <emphasis><filename>do_stage_partition()</filename>:</emphasis>
- Special content-staging hook called before
- <filename>do_prepare_partition()</filename>.
- This method is normally empty.</para>
-
- <para>Typically, a partition just uses the passed-in
- parameters (e.g. the unmodified value of
- <filename>bootimg_dir</filename>).
- However, in some cases, things might need to be
- more tailored.
- As an example, certain files might additionally
- need to be taken from
- <filename>bootimg_dir + /boot</filename>.
- This hook allows those files to be staged in a
- customized fashion.
- <note>
- <filename>get_bitbake_var()</filename>
- allows you to access non-standard variables
- that you might want to use for this
- behavior.
- </note>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- You can extend the source plugin mechanism.
- To add more hooks, create more source plugin methods
- within <filename>SourcePlugin</filename> and the
- corresponding derived subclasses.
- The code that calls the plugin methods uses the
- <filename>plugin.get_source_plugin_methods()</filename>
- function to find the method or methods needed by the call.
-                Retrieval of those methods is accomplished by populating a
-                dict whose keys are the method names of interest.
-                On success, the corresponding values are filled in with the
-                actual methods.
- See the Wic implementation for examples and details.
- </para>
- </section>
-
- <section id='wic-usage-examples'>
- <title>Examples</title>
-
- <para>
- This section provides several examples that show how to use
- the Wic utility.
- All the examples assume the list of requirements in the
- "<link linkend='wic-requirements'>Requirements</link>"
- section have been met.
- The examples assume the previously generated image is
- <filename>core-image-minimal</filename>.
- </para>
-
- <section id='generate-an-image-using-a-provided-kickstart-file'>
- <title>Generate an Image using an Existing Kickstart File</title>
-
- <para>
- This example runs in Cooked Mode and uses the
- <filename>mkefidisk</filename> kickstart file:
- <literallayout class='monospaced'>
- $ wic create mkefidisk -e core-image-minimal
- INFO: Building wic-tools...
- .
- .
- .
- INFO: The new image(s) can be found here:
- ./mkefidisk-201804191017-sda.direct
-
- The following build artifacts were used to create the image(s):
- ROOTFS_DIR: /home/stephano/build/master/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/rootfs
- BOOTIMG_DIR: /home/stephano/build/master/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
- KERNEL_DIR: /home/stephano/build/master/build/tmp-glibc/deploy/images/qemux86
- NATIVE_SYSROOT: /home/stephano/build/master/build/tmp-glibc/work/i586-oe-linux/wic-tools/1.0-r0/recipe-sysroot-native
-
- INFO: The image(s) were created using OE kickstart file:
- /home/stephano/build/master/openembedded-core/scripts/lib/wic/canned-wks/mkefidisk.wks
- </literallayout>
- The previous example shows the easiest way to create
- an image by running in cooked mode and supplying
- a kickstart file and the "-e" option to point to the
- existing build artifacts.
- Your <filename>local.conf</filename> file needs to have
- the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- variable set to the machine you are using, which is
- "qemux86" in this example.
- </para>
-
- <para>
- Once the image builds, the output provides image
- location, artifact use, and kickstart file information.
- <note>
- You should always verify the details provided in the
- output to make sure that the image was indeed
- created exactly as expected.
- </note>
- </para>
-
- <para>
- Continuing with the example, you can now write the
- image from the Build Directory onto a USB stick, or
- whatever media for which you built your image, and boot
- from the media.
- You can write the image by using
- <filename>bmaptool</filename> or
- <filename>dd</filename>:
- <literallayout class='monospaced'>
- $ oe-run-native bmaptool copy mkefidisk-201804191017-sda.direct /dev/sd<replaceable>X</replaceable>
- </literallayout>
- or
- <literallayout class='monospaced'>
- $ sudo dd if=mkefidisk-201804191017-sda.direct of=/dev/sd<replaceable>X</replaceable>
- </literallayout>
- <note>
- For more information on how to use the
- <filename>bmaptool</filename> to flash a device
- with an image, see the
- "<link linkend='flashing-images-using-bmaptool'>Flashing Images Using <filename>bmaptool</filename></link>"
- section.
- </note>
- </para>
- </section>
-
- <section id='using-a-modified-kickstart-file'>
- <title>Using a Modified Kickstart File</title>
-
- <para>
- Because partitioned image creation is driven by the
- kickstart file, it is easy to affect image creation by
- changing the parameters in the file.
- This next example demonstrates that through modification
- of the <filename>directdisk-gpt</filename> kickstart
- file.
- </para>
-
- <para>
- As mentioned earlier, you can use the command
- <filename>wic list images</filename> to show the list
- of existing kickstart files.
- The directory in which the
- <filename>directdisk-gpt.wks</filename> file resides is
-                    <filename>scripts/lib/wic/canned-wks/</filename>,
- which is located in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- (e.g. <filename>poky</filename>).
- Because available files reside in this directory,
- you can create and add your own custom files to the
- directory.
- Subsequent use of the
- <filename>wic list images</filename> command would then
- include your kickstart files.
- </para>
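-
-                <para>
-                    For example, the following command lists the canned
-                    kickstart files that ship with the build system.
-                    The output below is abbreviated and the descriptions are
-                    illustrative:
-                    <literallayout class='monospaced'>
-     $ wic list images
-       mkefidisk                     Create an EFI disk image
-       directdisk-gpt                Create a 'pcbios' direct disk image
-       directdisk                    Create a 'pcbios' direct disk image
-       .
-       .
-       .
-                    </literallayout>
-                    Any <filename>.wks</filename> file you add to the
-                    <filename>canned-wks</filename> directory appears in this
-                    list under its file name without the extension.
-                </para>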
-
- <para>
- In this example, the existing
- <filename>directdisk-gpt</filename> file already does
- most of what is needed.
- However, for the hardware in this example, the image
- will need to boot from <filename>sdb</filename> instead
- of <filename>sda</filename>, which is what the
- <filename>directdisk-gpt</filename> kickstart file
- uses.
- </para>
-
- <para>
- The example begins by making a copy of the
- <filename>directdisk-gpt.wks</filename> file in the
-                    <filename>scripts/lib/wic/canned-wks</filename>
- directory and then by changing the lines that specify
- the target disk from which to boot.
- <literallayout class='monospaced'>
- $ cp /home/stephano/poky/scripts/lib/wic/canned-wks/directdisk-gpt.wks \
- /home/stephano/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
- </literallayout>
- Next, the example modifies the
- <filename>directdisksdb-gpt.wks</filename> file and
- changes all instances of
- "<filename>--ondisk sda</filename>" to
- "<filename>--ondisk sdb</filename>".
- The example changes the following two lines and leaves
- the remaining lines untouched:
- <literallayout class='monospaced'>
- part /boot --source bootimg-pcbios --ondisk sdb --label boot --active --align 1024
- part / --source rootfs --ondisk sdb --fstype=ext4 --label platform --align 1024 --use-uuid
- </literallayout>
- Once the lines are changed, the example generates the
- <filename>directdisksdb-gpt</filename> image.
- The command points the process at the
- <filename>core-image-minimal</filename> artifacts for
- the Next Unit of Computing (nuc)
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                    specified in the <filename>local.conf</filename> file.
- <literallayout class='monospaced'>
- $ wic create directdisksdb-gpt -e core-image-minimal
- INFO: Building wic-tools...
- .
- .
- .
- Initialising tasks: 100% |#######################################| Time: 0:00:01
- NOTE: Executing SetScene Tasks
- NOTE: Executing RunQueue Tasks
- NOTE: Tasks Summary: Attempted 1161 tasks of which 1157 didn't need to be rerun and all succeeded.
- INFO: Creating image(s)...
-
- INFO: The new image(s) can be found here:
- ./directdisksdb-gpt-201710090938-sdb.direct
-
- The following build artifacts were used to create the image(s):
- ROOTFS_DIR: /home/stephano/build/master/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/rootfs
- BOOTIMG_DIR: /home/stephano/build/master/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
- KERNEL_DIR: /home/stephano/build/master/build/tmp-glibc/deploy/images/qemux86
- NATIVE_SYSROOT: /home/stephano/build/master/build/tmp-glibc/work/i586-oe-linux/wic-tools/1.0-r0/recipe-sysroot-native
-
- INFO: The image(s) were created using OE kickstart file:
- /home/stephano/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
- </literallayout>
- Continuing with the example, you can now directly
- <filename>dd</filename> the image to a USB stick, or
- whatever media for which you built your image,
- and boot the resulting media:
- <literallayout class='monospaced'>
- $ sudo dd if=directdisksdb-gpt-201710090938-sdb.direct of=/dev/sdb
- 140966+0 records in
- 140966+0 records out
- 72174592 bytes (72 MB, 69 MiB) copied, 78.0282 s, 925 kB/s
- $ sudo eject /dev/sdb
- </literallayout>
- </para>
- </section>
-
- <section id='using-a-modified-kickstart-file-and-running-in-raw-mode'>
- <title>Using a Modified Kickstart File and Running in Raw Mode</title>
-
- <para>
- This next example manually specifies each build artifact
- (runs in Raw Mode) and uses a modified kickstart file.
- The example also uses the <filename>-o</filename> option
- to cause Wic to create the output
- somewhere other than the default output directory,
- which is the current directory:
- <literallayout class='monospaced'>
- $ wic create /home/stephano/my_yocto/test.wks -o /home/stephano/testwic \
- --rootfs-dir /home/stephano/build/master/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/rootfs \
- --bootimg-dir /home/stephano/build/master/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share \
- --kernel-dir /home/stephano/build/master/build/tmp/deploy/images/qemux86 \
- --native-sysroot /home/stephano/build/master/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native
-
- INFO: Creating image(s)...
-
- INFO: The new image(s) can be found here:
- /home/stephano/testwic/test-201710091445-sdb.direct
-
- The following build artifacts were used to create the image(s):
- ROOTFS_DIR: /home/stephano/build/master/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/rootfs
- BOOTIMG_DIR: /home/stephano/build/master/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
- KERNEL_DIR: /home/stephano/build/master/build/tmp-glibc/deploy/images/qemux86
- NATIVE_SYSROOT: /home/stephano/build/master/build/tmp-glibc/work/i586-oe-linux/wic-tools/1.0-r0/recipe-sysroot-native
-
- INFO: The image(s) were created using OE kickstart file:
- /home/stephano/my_yocto/test.wks
- </literallayout>
- For this example,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
- did not have to be specified in the
- <filename>local.conf</filename> file since the
-                    build artifacts are specified manually.
- </para>
- </section>
-
- <section id='using-wic-to-manipulate-an-image'>
- <title>Using Wic to Manipulate an Image</title>
-
- <para>
- Wic image manipulation allows you to shorten turnaround
- time during image development.
- For example, you can use Wic to delete the kernel partition
- of a Wic image and then insert a newly built kernel.
- This saves you time from having to rebuild the entire image
- each time you modify the kernel.
- <note>
- In order to use Wic to manipulate a Wic image as in
- this example, your development machine must have the
- <filename>mtools</filename> package installed.
- </note>
- </para>
-
- <para>
- The following example examines the contents of the Wic
- image, deletes the existing kernel, and then inserts a
- new kernel:
- <orderedlist>
- <listitem><para>
- <emphasis>List the Partitions:</emphasis>
- Use the <filename>wic ls</filename> command to list
- all the partitions in the Wic image:
- <literallayout class='monospaced'>
- $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic
- Num Start End Size Fstype
- 1 1048576 25041919 23993344 fat16
- 2 25165824 72157183 46991360 ext4
- </literallayout>
- The previous output shows two partitions in the
- <filename>core-image-minimal-qemux86.wic</filename>
- image.
- </para></listitem>
- <listitem><para>
- <emphasis>Examine a Particular Partition:</emphasis>
- Use the <filename>wic ls</filename> command again
- but in a different form to examine a particular
- partition.
- <note>
- You can get command usage on any Wic command
- using the following form:
- <literallayout class='monospaced'>
- $ wic help <replaceable>command</replaceable>
- </literallayout>
- For example, the following command shows you
- the various ways to use the
- <filename>wic ls</filename> command:
- <literallayout class='monospaced'>
- $ wic help ls
- </literallayout>
- </note>
- The following command shows what is in Partition
- one:
- <literallayout class='monospaced'>
- $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1
- Volume in drive : is boot
- Volume Serial Number is E894-1809
- Directory for ::/
-
- libcom32 c32 186500 2017-10-09 16:06
- libutil c32 24148 2017-10-09 16:06
- syslinux cfg 220 2017-10-09 16:06
- vesamenu c32 27104 2017-10-09 16:06
- vmlinuz 6904608 2017-10-09 16:06
- 5 files 7 142 580 bytes
- 16 582 656 bytes free
- </literallayout>
- The previous output shows five files, with the
- <filename>vmlinuz</filename> being the kernel.
- <note>
- If you see the following error, you need to
- update or create a
- <filename>~/.mtoolsrc</filename> file and
-                                be sure to have the line "mtools_skip_check=1"
- in the file.
- Then, run the Wic command again:
- <literallayout class='monospaced'>
- ERROR: _exec_cmd: /usr/bin/mdir -i /tmp/wic-parttfokuwra ::/ returned '1' instead of 0
- output: Total number of sectors (47824) not a multiple of sectors per track (32)!
- Add mtools_skip_check=1 to your .mtoolsrc file to skip this test
- </literallayout>
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Remove the Old Kernel:</emphasis>
- Use the <filename>wic rm</filename> command to
- remove the <filename>vmlinuz</filename> file
- (kernel):
- <literallayout class='monospaced'>
- $ wic rm tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Add In the New Kernel:</emphasis>
- Use the <filename>wic cp</filename> command to
- add the updated kernel to the Wic image.
- Depending on how you built your kernel, it could
- be in different places.
- If you used <filename>devtool</filename> and
- an SDK to build your kernel, it resides in the
- <filename>tmp/work</filename> directory of the
- extensible SDK.
- If you used <filename>make</filename> to build the
- kernel, the kernel will be in the
- <filename>workspace/sources</filename> area.
- </para>
-
- <para>The following example assumes
- <filename>devtool</filename> was used to build
- the kernel:
- <literallayout class='monospaced'>
-     $ wic cp ~/poky_sdk/tmp/work/qemux86-poky-linux/linux-yocto/4.12.12+git999-r0/linux-yocto-4.12.12+git999/arch/x86/boot/bzImage \
- ~/poky/build/tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
- </literallayout>
- Once the new kernel is added back into the image,
- you can use the <filename>dd</filename>
- command or
- <link linkend='flashing-images-using-bmaptool'><filename>bmaptool</filename></link>
- to flash your wic image onto an SD card
- or USB stick and test your target.
- <note>
- Using <filename>bmaptool</filename> is
- generally 10 to 20 times faster than using
- <filename>dd</filename>.
- </note>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
- </section>
- </section>
-
- <section id='flashing-images-using-bmaptool'>
- <title>Flashing Images Using <filename>bmaptool</filename></title>
-
- <para>
- A fast and easy way to flash an image to a bootable device
- is to use Bmaptool, which is integrated into the OpenEmbedded
- build system.
- Bmaptool is a generic tool that creates a file's block map (bmap)
- and then uses that map to copy the file.
- As compared to traditional tools such as dd or cp, Bmaptool
- can copy (or flash) large files like raw system image files
- much faster.
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- If you are using Ubuntu or Debian distributions, you
- can install the <filename>bmap-tools</filename> package
- using the following command and then use the tool
- without specifying <filename>PATH</filename> even from
- the root account:
- <literallayout class='monospaced'>
- $ sudo apt-get install bmap-tools
- </literallayout>
- </para></listitem>
- <listitem><para>
- If you are unable to install the
- <filename>bmap-tools</filename> package, you will
- need to build Bmaptool before using it.
- Use the following command:
- <literallayout class='monospaced'>
- $ bitbake bmap-tools-native
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
-
- <para>
-            The following example shows how to flash a Wic image.
-            Although the example uses a Wic image, you can use
-            Bmaptool to flash any type of image.
- Use these steps to flash an image using Bmaptool:
- <orderedlist>
- <listitem><para>
- <emphasis>Update your <filename>local.conf</filename> File:</emphasis>
- You need to have the following set in your
- <filename>local.conf</filename> file before building
- your image:
- <literallayout class='monospaced'>
- IMAGE_FSTYPES += "wic wic.bmap"
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Get Your Image:</emphasis>
- Either have your image ready (pre-built with the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></ulink>
- setting previously mentioned) or take the step to build
- the image:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>image</replaceable>
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Flash the Device:</emphasis>
- Flash the device with the image by using Bmaptool
- depending on your particular setup.
- The following commands assume the image resides in the
- Build Directory's <filename>deploy/images/</filename>
- area:
- <itemizedlist>
- <listitem><para>
- If you have write access to the media, use this
- command form:
- <literallayout class='monospaced'>
- $ oe-run-native bmap-tools-native bmaptool copy <replaceable>build-directory</replaceable>/tmp/deploy/images/<replaceable>machine</replaceable>/<replaceable>image</replaceable>.wic /dev/sd<replaceable>X</replaceable>
- </literallayout>
- </para></listitem>
- <listitem><para>
- If you do not have write access to the media, set
- your permissions first and then use the same
- command form:
- <literallayout class='monospaced'>
- $ sudo chmod 666 /dev/sd<replaceable>X</replaceable>
- $ oe-run-native bmap-tools-native bmaptool copy <replaceable>build-directory</replaceable>/tmp/deploy/images/<replaceable>machine</replaceable>/<replaceable>image</replaceable>.wic /dev/sd<replaceable>X</replaceable>
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- For help on the <filename>bmaptool</filename> command, use the
- following command:
- <literallayout class='monospaced'>
- $ bmaptool --help
- </literallayout>
- </para>
- </section>
-
- <section id='making-images-more-secure'>
- <title>Making Images More Secure</title>
-
- <para>
- Security is of increasing concern for embedded devices.
- Consider the issues and problems discussed in just this
- sampling of work found across the Internet:
- <itemizedlist>
- <listitem><para><emphasis>
- "<ulink url='https://www.schneier.com/blog/archives/2014/01/security_risks_9.html'>Security Risks of Embedded Systems</ulink>"</emphasis>
- by Bruce Schneier
- </para></listitem>
- <listitem><para><emphasis>
- "<ulink url='http://census2012.sourceforge.net/paper.html'>Internet Census 2012</ulink>"</emphasis>
- by Carna Botnet</para></listitem>
- <listitem><para><emphasis>
- "<ulink url='http://elinux.org/images/6/6f/Security-issues.pdf'>Security Issues for Embedded Devices</ulink>"</emphasis>
- by Jake Edge
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- When securing your image is of concern, there are steps, tools,
- and variables that you can consider to help you reach the
- security goals you need for your particular device.
- Not all situations are identical when it comes to making an
- image secure.
- Consequently, this section provides some guidance and suggestions
- for consideration when you want to make your image more secure.
- <note>
- Because the security requirements and risks are
- different for every type of device, this section cannot
- provide a complete reference on securing your custom OS.
- It is strongly recommended that you also consult other sources
- of information on embedded Linux system hardening and on
- security.
- </note>
- </para>
-
- <section id='general-considerations'>
- <title>General Considerations</title>
-
- <para>
- General considerations exist that help you create more
- secure images.
- You should consider the following suggestions to help
- make your device more secure:
- <itemizedlist>
- <listitem><para>
- Scan additional code you are adding to the system
- (e.g. application code) by using static analysis
- tools.
- Look for buffer overflows and other potential
- security problems.
- </para></listitem>
- <listitem><para>
- Pay particular attention to the security for
- any web-based administration interface.
- </para>
- <para>Web interfaces typically need to perform
-                        administrative functions and therefore tend to run with
- elevated privileges.
- Thus, the consequences resulting from the interface's
- security becoming compromised can be serious.
- Look for common web vulnerabilities such as
- cross-site-scripting (XSS), unvalidated inputs,
- and so forth.</para>
- <para>As with system passwords, the default credentials
- for accessing a web-based interface should not be the
- same across all devices.
- This is particularly true if the interface is enabled
- by default as it can be assumed that many end-users
- will not change the credentials.
- </para></listitem>
- <listitem><para>
- Ensure you can update the software on the device to
- mitigate vulnerabilities discovered in the future.
- This consideration especially applies when your
- device is network-enabled.
- </para></listitem>
- <listitem><para>
- Ensure you remove or disable debugging functionality
- before producing the final image.
- For information on how to do this, see the
- "<link linkend='considerations-specific-to-the-openembedded-build-system'>Considerations Specific to the OpenEmbedded Build System</link>"
- section.
- </para></listitem>
- <listitem><para>
- Ensure you have no network services listening that
- are not needed.
- </para></listitem>
- <listitem><para>
- Remove any software from the image that is not needed.
- </para></listitem>
- <listitem><para>
- Enable hardware support for secure boot functionality
- when your device supports this functionality.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='security-flags'>
- <title>Security Flags</title>
-
- <para>
- The Yocto Project has security flags that you can enable that
- help make your build output more secure.
- The security flags are in the
- <filename>meta/conf/distro/include/security_flags.inc</filename>
- file in your
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- (e.g. <filename>poky</filename>).
- <note>
- Depending on the recipe, certain security flags are enabled
- and disabled by default.
- </note>
- </para>
-
- <para>
-<!--
- The GCC/LD flags in <filename>security_flags.inc</filename>
- enable more secure code generation.
- By including the <filename>security_flags.inc</filename>
- file, you enable flags to the compiler and linker that cause
- them to generate more secure code.
- <note>
- The GCC/LD flags are enabled by default in the
- <filename>poky-lsb</filename> distribution.
- </note>
--->
- Use the following line in your
- <filename>local.conf</filename> file or in your custom
- distribution configuration file to enable the security
- compiler and linker flags for your build:
- <literallayout class='monospaced'>
- require conf/distro/include/security_flags.inc
- </literallayout>
- </para>
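-
-            <para>
-                The include file works by adding hardening options to the
-                compiler and linker flags through variables such as
-                <filename>SECURITY_CFLAGS</filename>.
-                If a particular recipe cannot be built with the hardened
-                flags, the flags can be overridden for just that recipe.
-                The following sketch, in which <replaceable>recipe</replaceable>
-                is a placeholder for an actual recipe name, shows the general
-                form used inside <filename>security_flags.inc</filename>:
-                <literallayout class='monospaced'>
-     SECURITY_CFLAGS_pn-<replaceable>recipe</replaceable> = ""
-                </literallayout>
-            </para>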
- </section>
-
- <section id='considerations-specific-to-the-openembedded-build-system'>
- <title>Considerations Specific to the OpenEmbedded Build System</title>
-
- <para>
- You can take some steps that are specific to the
- OpenEmbedded build system to make your images more secure:
- <itemizedlist>
- <listitem><para>
- Ensure "debug-tweaks" is not one of your selected
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>.
- When creating a new project, the default is to provide you
- with an initial <filename>local.conf</filename> file that
- enables this feature using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES'><filename>EXTRA_IMAGE_FEATURES</filename></ulink> variable with the line:
- <literallayout class='monospaced'>
- EXTRA_IMAGE_FEATURES = "debug-tweaks"
- </literallayout>
- To disable that feature, simply comment out that line in your
- <filename>local.conf</filename> file, or
- make sure <filename>IMAGE_FEATURES</filename> does not contain
- "debug-tweaks" before producing your final image.
- Among other things, leaving this in place sets the
- root password as blank, which makes logging in for
- debugging or inspection easy during
- development but also means anyone can easily log in
- during production.
- </para></listitem>
- <listitem><para>
- It is possible to set a root password for the image
- and also to set passwords for any extra users you might
- add (e.g. administrative or service type users).
- When you set up passwords for multiple images or
- users, you should not duplicate passwords.
- </para>
- <para>
- To set up passwords, use the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-extrausers'><filename>extrausers</filename></ulink>
- class, which is the preferred method.
- For an example on how to set up both root and user
- passwords, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-extrausers'><filename>extrausers.bbclass</filename></ulink>"
-                            section and the brief example following this list.
- <note>
- When adding extra user accounts or setting a
- root password, be cautious about setting the
- same password on every device.
- If you do this, and the password you have set
- is exposed, then every device is now potentially
- compromised.
- If you need this access but want to ensure
- security, consider setting a different,
- random password for each device.
- Typically, you do this as a separate step after
- you deploy the image onto the device.
- </note>
- </para></listitem>
- <listitem><para>
- Consider enabling a Mandatory Access Control (MAC)
- framework such as SMACK or SELinux and tuning it
- appropriately for your device's usage.
- You can find more information in the
- <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/meta-selinux/'><filename>meta-selinux</filename></ulink>
- layer.
- </para></listitem>
- </itemizedlist>
- </para>
-
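-            <para>
-                The following sketch shows one way to set a root password
-                using the <filename>extrausers</filename> class from an image
-                recipe or an append file.
-                The password value is a placeholder; supply your own
-                encrypted password (for example, one generated with
-                <filename>openssl passwd</filename>):
-                <literallayout class='monospaced'>
-     inherit extrausers
-     EXTRA_USERS_PARAMS = "\
-         usermod -p '<replaceable>encrypted-password</replaceable>' root; \
-         "
-                </literallayout>
-            </para>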
- </section>
-
- <section id='tools-for-hardening-your-image'>
- <title>Tools for Hardening Your Image</title>
-
- <para>
- The Yocto Project provides tools for making your image
- more secure.
- You can find these tools in the
- <filename>meta-security</filename> layer of the
- <ulink url='&YOCTO_GIT_URL;'>Yocto Project Source Repositories</ulink>.
- </para>
- </section>
- </section>
-
- <section id='creating-your-own-distribution'>
- <title>Creating Your Own Distribution</title>
-
- <para>
- When you build an image using the Yocto Project and
- do not alter any distribution
- <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>,
- you are creating a Poky distribution.
- If you wish to gain more control over package alternative
- selections, compile-time options, and other low-level
- configurations, you can create your own distribution.
- </para>
-
- <para>
- To create your own distribution, the basic steps consist of
- creating your own distribution layer, creating your own
- distribution configuration file, and then adding any needed
- code and Metadata to the layer.
-                    The following steps provide more detail, and a minimal
-                    example configuration file follows the list:
- <itemizedlist>
- <listitem><para><emphasis>Create a layer for your new distro:</emphasis>
- Create your distribution layer so that you can keep your
- Metadata and code for the distribution separate.
- It is strongly recommended that you create and use your own
- layer for configuration and code.
- Using your own layer as compared to just placing
- configurations in a <filename>local.conf</filename>
- configuration file makes it easier to reproduce the same
- build configuration when using multiple build machines.
- See the
- "<link linkend='creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</link>"
- section for information on how to quickly set up a layer.
- </para></listitem>
- <listitem><para><emphasis>Create the distribution configuration file:</emphasis>
- The distribution configuration file needs to be created in
- the <filename>conf/distro</filename> directory of your
- layer.
- You need to name it using your distribution name
- (e.g. <filename>mydistro.conf</filename>).
- <note>
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>
- variable in your
- <filename>local.conf</filename> file determines the
- name of your distribution.
- </note></para>
- <para>You can split out parts of your configuration file
- into include files and then "require" them from within
- your distribution configuration file.
- Be sure to place the include files in the
- <filename>conf/distro/include</filename> directory of
- your layer.
-                    A common use of include files is to separate out the
-                    selection of desired versions and revisions for
-                    individual recipes.
-                    </para>
- <para>Your configuration file needs to set the following
- required variables:
- <literallayout class='monospaced'>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_NAME'><filename>DISTRO_NAME</filename></ulink>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_VERSION'><filename>DISTRO_VERSION</filename></ulink>
- </literallayout>
-                    The following variables are optional and you typically
- set them from the distribution configuration file:
- <literallayout class='monospaced'>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></ulink>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_EXTRA_RDEPENDS'><filename>DISTRO_EXTRA_RDEPENDS</filename></ulink>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_EXTRA_RRECOMMENDS'><filename>DISTRO_EXTRA_RRECOMMENDS</filename></ulink>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TCLIBC'><filename>TCLIBC</filename></ulink>
- </literallayout>
- <tip>
- If you want to base your distribution configuration file
- on the very basic configuration from OE-Core, you
- can use
- <filename>conf/distro/defaultsetup.conf</filename> as
- a reference and just include variables that differ
- as compared to <filename>defaultsetup.conf</filename>.
- Alternatively, you can create a distribution
- configuration file from scratch using the
- <filename>defaultsetup.conf</filename> file
- or configuration files from other distributions
- such as Poky or Angstrom as references.
- </tip></para></listitem>
- <listitem><para><emphasis>Provide miscellaneous variables:</emphasis>
- Be sure to define any other variables for which you want to
- create a default or enforce as part of the distribution
- configuration.
- You can include nearly any variable from the
- <filename>local.conf</filename> file.
- The variables you use are not limited to the list in the
- previous bulleted item.</para></listitem>
- <listitem><para><emphasis>Point to Your distribution configuration file:</emphasis>
- In your <filename>local.conf</filename> file in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
- set your
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>
- variable to point to your distribution's configuration file.
- For example, if your distribution's configuration file is
- named <filename>mydistro.conf</filename>, then you point
- to it as follows:
- <literallayout class='monospaced'>
- DISTRO = "mydistro"
- </literallayout></para></listitem>
- <listitem><para><emphasis>Add more to the layer if necessary:</emphasis>
- Use your layer to hold other information needed for the
- distribution:
- <itemizedlist>
- <listitem><para>Add recipes for installing
- distro-specific configuration files that are not
- already installed by another recipe.
- If you have distro-specific configuration files
- that are included by an existing recipe, you should
- add an append file (<filename>.bbappend</filename>)
- for those.
- For general information and recommendations
- on how to add recipes to your layer, see the
- "<link linkend='creating-your-own-layer'>Creating Your Own Layer</link>"
- and
- "<link linkend='best-practices-to-follow-when-creating-layers'>Following Best Practices When Creating Layers</link>"
- sections.</para></listitem>
- <listitem><para>Add any image recipes that are specific
- to your distribution.</para></listitem>
- <listitem><para>Add a <filename>psplash</filename>
- append file for a branded splash screen.
- For information on append files, see the
- "<link linkend='using-bbappend-files'>Using .bbappend Files in Your Layer</link>"
- section.</para></listitem>
- <listitem><para>Add any other append files to make
- custom changes that are specific to individual
- recipes.</para></listitem>
- </itemizedlist></para></listitem>
- </itemizedlist>
- </para>
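-
-        <para>
-            As a point of reference, a minimal distribution configuration
-            file (e.g. <filename>conf/distro/mydistro.conf</filename>) might
-            look like the following sketch.
-            The values shown are illustrative placeholders rather than
-            requirements:
-            <literallayout class='monospaced'>
-     DISTRO = "mydistro"
-     DISTRO_NAME = "My Distribution"
-     DISTRO_VERSION = "1.0"
-     DISTRO_CODENAME = "main"
-     MAINTAINER = "you@example.com"
-
-     # Distribution-wide policy, adjusted as needed.
-     DISTRO_FEATURES_append = " systemd"
-     TCLIBC = "glibc"
-            </literallayout>
-        </para>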
- </section>
-
- <section id='creating-a-custom-template-configuration-directory'>
- <title>Creating a Custom Template Configuration Directory</title>
-
- <para>
- If you are producing your own customized version
- of the build system for use by other users, you might
- want to customize the message shown by the setup script or
- you might want to change the template configuration files (i.e.
- <filename>local.conf</filename> and
- <filename>bblayers.conf</filename>) that are created in
- a new build directory.
- </para>
-
- <para>
- The OpenEmbedded build system uses the environment variable
- <filename>TEMPLATECONF</filename> to locate the directory
- from which it gathers configuration information that ultimately
- ends up in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- <filename>conf</filename> directory.
- By default, <filename>TEMPLATECONF</filename> is set as
- follows in the <filename>poky</filename> repository:
- <literallayout class='monospaced'>
- TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
- </literallayout>
- This is the directory used by the build system to find templates
- from which to build some key configuration files.
- If you look at this directory, you will see the
- <filename>bblayers.conf.sample</filename>,
- <filename>local.conf.sample</filename>, and
- <filename>conf-notes.txt</filename> files.
-            The build system uses these files to form the respective
-            <filename>bblayers.conf</filename> and
-            <filename>local.conf</filename> files and to display the list of
-            BitBake targets when running the setup script.
- </para>
-
- <para>
- To override these default configuration files with
- configurations you want used within every new
- Build Directory, simply set the
- <filename>TEMPLATECONF</filename> variable to your directory.
- The <filename>TEMPLATECONF</filename> variable is set in the
- <filename>.templateconf</filename> file, which is in the
- top-level
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- folder (e.g. <filename>poky</filename>).
-            Edit the <filename>.templateconf</filename> file so that it
-            points to your directory.
- </para>
-
- <para>
- Best practices dictate that you should keep your
- template configuration directory in your custom distribution layer.
- For example, suppose you have a layer named
- <filename>meta-mylayer</filename> located in your home directory
- and you want your template configuration directory named
- <filename>myconf</filename>.
- Changing the <filename>.templateconf</filename> as follows
- causes the OpenEmbedded build system to look in your directory
- and base its configuration files on the
- <filename>*.sample</filename> configuration files it finds.
- The final configuration files (i.e.
- <filename>local.conf</filename> and
-            <filename>bblayers.conf</filename>) ultimately still end up in
- your Build Directory, but they are based on your
- <filename>*.sample</filename> files.
- <literallayout class='monospaced'>
- TEMPLATECONF=${TEMPLATECONF:-meta-mylayer/myconf}
- </literallayout>
- </para>
-
- <para>
- Aside from the <filename>*.sample</filename> configuration files,
- the <filename>conf-notes.txt</filename> also resides in the
- default <filename>meta-poky/conf</filename> directory.
- The script that sets up the build environment
- (i.e.
- <ulink url="&YOCTO_DOCS_REF_URL;#structure-core-script"><filename>&OE_INIT_FILE;</filename></ulink>)
- uses this file to display BitBake targets as part of the script
- output.
- Customizing this <filename>conf-notes.txt</filename> file is a
- good way to make sure your list of custom targets appears
- as part of the script's output.
- </para>
-
- <para>
- Here is the default list of targets displayed as a result of
- running either of the setup scripts:
- <literallayout class='monospaced'>
- You can now run 'bitbake &lt;target&gt;'
-
- Common targets are:
- core-image-minimal
- core-image-sato
- meta-toolchain
- meta-ide-support
- </literallayout>
- </para>
-
- <para>
- Changing the listed common targets is as easy as editing your
- version of <filename>conf-notes.txt</filename> in your
- custom template configuration directory and making sure you
- have <filename>TEMPLATECONF</filename> set to your directory.
- </para>
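-
-        <para>
-            For example, a <filename>conf-notes.txt</filename> file in your
-            custom template configuration directory with the following
-            contents (the target names are purely illustrative) would replace
-            the default list shown above:
-            <literallayout class='monospaced'>
-     Common targets are:
-         my-image-minimal
-         my-image-dev
-            </literallayout>
-        </para>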
- </section>
-
- <section id='dev-saving-memory-during-a-build'>
- <title>Conserving Disk Space During Builds</title>
-
- <para>
- To help conserve disk space during builds, you can add the
- following statement to your project's
- <filename>local.conf</filename> configuration file found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- INHERIT += "rm_work"
- </literallayout>
- Adding this statement deletes the work directory used for building
- a recipe once the recipe is built.
- For more information on "rm_work", see the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-rm-work'><filename>rm_work</filename></ulink>
- class in the Yocto Project Reference Manual.
- </para>
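-
-        <para>
-            If you need to keep the work directories for a few specific
-            recipes (for example, while you are actively debugging them), the
-            <filename>rm_work</filename> class also honors the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-RM_WORK_EXCLUDE'><filename>RM_WORK_EXCLUDE</filename></ulink>
-            variable.
-            The recipe names below are only an example:
-            <literallayout class='monospaced'>
-     INHERIT += "rm_work"
-     RM_WORK_EXCLUDE += "busybox linux-yocto"
-            </literallayout>
-        </para>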
- </section>
-
- <section id='working-with-packages'>
- <title>Working with Packages</title>
-
- <para>
- This section describes a few tasks that involve packages:
- <itemizedlist>
- <listitem><para>
- <link linkend='excluding-packages-from-an-image'>Excluding packages from an image</link>
- </para></listitem>
- <listitem><para>
- <link linkend='incrementing-a-binary-package-version'>Incrementing a binary package version</link>
- </para></listitem>
- <listitem><para>
- <link linkend='handling-optional-module-packaging'>Handling optional module packaging</link>
- </para></listitem>
- <listitem><para>
- <link linkend='using-runtime-package-management'>Using runtime package management</link>
- </para></listitem>
- <listitem><para>
- <link linkend='generating-and-using-signed-packages'>Generating and using signed packages</link>
- </para></listitem>
- <listitem><para>
- <link linkend='testing-packages-with-ptest'>Setting up and running package test (ptest)</link>
- </para></listitem>
- <listitem><para>
- <link linkend='creating-node-package-manager-npm-packages'>Creating node package manager (NPM) packages</link>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id='excluding-packages-from-an-image'>
- <title>Excluding Packages from an Image</title>
-
- <para>
- You might find it necessary to prevent specific packages
- from being installed into an image.
- If so, you can use several variables to direct the build
- system to essentially ignore installing recommended packages
- or to not install a package at all.
- </para>
-
- <para>
- The following list introduces variables you can use to
- prevent packages from being installed into your image.
- Each of these variables only works with IPK and RPM
- package types.
- Support for Debian packages does not exist.
- Also, you can use these variables from your
- <filename>local.conf</filename> file or attach them to a
-                    specific image recipe by using a recipe name override;
-                    the example following the list shows both approaches.
-                    For more detail on the variables, see the descriptions in the
-                    Yocto Project Reference Manual's glossary chapter.
- <itemizedlist>
- <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-BAD_RECOMMENDATIONS'><filename>BAD_RECOMMENDATIONS</filename></ulink>:
- Use this variable to specify "recommended-only"
- packages that you do not want installed.
- </para></listitem>
- <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-NO_RECOMMENDATIONS'><filename>NO_RECOMMENDATIONS</filename></ulink>:
- Use this variable to prevent all "recommended-only"
- packages from being installed.
- </para></listitem>
- <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_EXCLUDE'><filename>PACKAGE_EXCLUDE</filename></ulink>:
- Use this variable to prevent specific packages from
- being installed regardless of whether they are
- "recommended-only" or not.
-                        Be aware that the build process can fail with an
-                        error if you prevent the installation of a package
-                        that another installed package requires.
- </para></listitem>
- </itemizedlist>
- </para>
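-
-            <para>
-                The following sketch shows both ways of using these
-                variables; the package and image names are placeholders:
-                <literallayout class='monospaced'>
-     # In local.conf, applied to every image:
-     BAD_RECOMMENDATIONS = "<replaceable>package1</replaceable>"
-     PACKAGE_EXCLUDE = "<replaceable>package2</replaceable>"
-
-     # Attached to a single image recipe using a recipe name override:
-     PACKAGE_EXCLUDE_pn-<replaceable>target-image</replaceable> = "<replaceable>package3</replaceable>"
-                </literallayout>
-            </para>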
- </section>
-
- <section id='incrementing-a-binary-package-version'>
- <title>Incrementing a Package Version</title>
-
- <para>
- This section provides some background on how binary package
- versioning is accomplished and presents some of the services,
- variables, and terminology involved.
- </para>
-
- <para>
-                    To understand binary package versioning, consider the
-                    following terms; a brief illustration follows the list:
- <itemizedlist>
- <listitem><para>
- Binary Package: The binary package that is eventually
- built and installed into an image.
- </para></listitem>
- <listitem><para>
- Binary Package Version: The binary package version
- is composed of two components - a version and a
- revision.
- <note>
- Technically, a third component, the "epoch" (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>)
- is involved but this discussion for the most part
- ignores <filename>PE</filename>.
- </note>
- The version and revision are taken from the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>
- variables, respectively.
- </para></listitem>
- <listitem><para>
- <filename>PV</filename>: The recipe version.
- <filename>PV</filename> represents the version of the
- software being packaged.
- Do not confuse <filename>PV</filename> with the
- binary package version.
- </para></listitem>
- <listitem><para>
- <filename>PR</filename>: The recipe revision.
- </para></listitem>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCPV'><filename>SRCPV</filename></ulink>:
- The OpenEmbedded build system uses this string
- to help define the value of <filename>PV</filename>
- when the source code revision needs to be included
- in it.
- </para></listitem>
- <listitem><para>
- <ulink url='https://wiki.yoctoproject.org/wiki/PR_Service'>PR Service</ulink>:
- A network-based service that helps automate keeping
- package feeds compatible with existing package
- manager applications such as RPM, APT, and OPKG.
- </para></listitem>
- </itemizedlist>
- </para>
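-
-            <para>
-                As a brief illustration, consider a hypothetical package file
-                named <filename>foo_1.2.3-r4_armv7a-neon.ipk</filename>.
-                Here, "1.2.3" is the version taken from
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
-                and "r4" is the revision taken from
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>;
-                together they form the binary package version.
-            </para>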
-
- <para>
- Whenever the binary package content changes, the binary package
- version must change.
- Changing the binary package version is accomplished by changing
- or "bumping" the <filename>PR</filename> and/or
- <filename>PV</filename> values.
- Increasing these values occurs one of two ways:
- <itemizedlist>
- <listitem><para>Automatically using a Package Revision
- Service (PR Service).
- </para></listitem>
- <listitem><para>Manually incrementing the
- <filename>PR</filename> and/or
- <filename>PV</filename> variables.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
-                    A primary challenge for any build system and its users is
-                    maintaining a package feed that is compatible with
-                    existing package manager applications such as RPM, APT,
-                    and OPKG.
-                    Consequently, an automated system is much preferred over
-                    a manual one.
- In either system, the main requirement is that binary package
- version numbering increases in a linear fashion and that a
- number of version components exist that support that linear
- progression.
- For information on how to ensure package revisioning remains
- linear, see the
- "<link linkend='automatically-incrementing-a-binary-package-revision-number'>Automatically Incrementing a Binary Package Revision Number</link>"
- section.
- </para>
-
- <para>
- The following three sections provide related information on the
- PR Service, the manual method for "bumping"
- <filename>PR</filename> and/or <filename>PV</filename>, and
- on how to ensure binary package revisioning remains linear.
- </para>
-
- <section id='working-with-a-pr-service'>
- <title>Working With a PR Service</title>
-
- <para>
- As mentioned, attempting to maintain revision numbers in the
- <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
- is error prone, inaccurate, and causes problems for people
- submitting recipes.
- Conversely, the PR Service automatically generates
- increasing numbers, particularly the revision field,
- which removes the human element.
- <note>
- For additional information on using a PR Service, you
- can see the
- <ulink url='&YOCTO_WIKI_URL;/wiki/PR_Service'>PR Service</ulink>
- wiki page.
- </note>
- </para>
-
- <para>
- The Yocto Project uses variables in order of
- decreasing priority to facilitate revision numbering (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>, and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>
- for epoch, version, and revision, respectively).
- The values are highly dependent on the policies and
- procedures of a given distribution and package feed.
- </para>
-
- <para>
- Because the OpenEmbedded build system uses
- "<ulink url='&YOCTO_DOCS_OM_URL;#overview-checksums'>signatures</ulink>",
- which are unique to a given build, the build system
- knows when to rebuild packages.
- All the inputs into a given task are represented by a
- signature, which can trigger a rebuild when different.
- Thus, the build system itself does not rely on the
- <filename>PR</filename>, <filename>PV</filename>, and
- <filename>PE</filename> numbers to trigger a rebuild.
- The signatures, however, can be used to generate
- these values.
- </para>
-
- <para>
- The PR Service works with both
- <filename>OEBasic</filename> and
- <filename>OEBasicHash</filename> generators.
- The value of <filename>PR</filename> bumps when the
- checksum changes and the different generator mechanisms
- change signatures under different circumstances.
- </para>
-
- <para>
- As implemented, the build system includes values from
- the PR Service into the <filename>PR</filename> field as
- an addition using the form "<filename>.x</filename>" so
- <filename>r0</filename> becomes <filename>r0.1</filename>,
- <filename>r0.2</filename> and so forth.
- This scheme allows existing <filename>PR</filename> values
- to be used for whatever reasons, which include manual
- <filename>PR</filename> bumps, should it be necessary.
- </para>
-
- <para>
- By default, the PR Service is not enabled or running.
-                    Thus, the packages generated are just "self-consistent".
- The build system adds and removes packages and
- there are no guarantees about upgrade paths but images
- will be consistent and correct with the latest changes.
- </para>
-
- <para>
- The simplest form for a PR Service is for it to exist
- for a single host development system that builds the
- package feed (building system).
- For this scenario, you can enable a local PR Service by
- setting
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PRSERV_HOST'><filename>PRSERV_HOST</filename></ulink>
- in your <filename>local.conf</filename> file in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- PRSERV_HOST = "localhost:0"
- </literallayout>
- Once the service is started, packages will automatically
- get increasing <filename>PR</filename> values and
- BitBake takes care of starting and stopping the server.
- </para>
-
- <para>
- If you have a more complex setup where multiple host
- development systems work against a common, shared package
- feed, you have a single PR Service running and it is
- connected to each building system.
- For this scenario, you need to start the PR Service using
- the <filename>bitbake-prserv</filename> command:
- <literallayout class='monospaced'>
- bitbake-prserv --host <replaceable>ip</replaceable> --port <replaceable>port</replaceable> --start
- </literallayout>
- In addition to hand-starting the service, you need to
- update the <filename>local.conf</filename> file of each
- building system as described earlier so each system
- points to the server and port.
- </para>
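-
-                <para>
-                    For example, each building system's
-                    <filename>local.conf</filename> file would contain a line
-                    of the following form, where the address and port are
-                    those of your central PR Service:
-                    <literallayout class='monospaced'>
-     PRSERV_HOST = "<replaceable>ip</replaceable>:<replaceable>port</replaceable>"
-                    </literallayout>
-                </para>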
-
- <para>
- It is also recommended you use build history, which adds
- some sanity checks to binary package versions, in
- conjunction with the server that is running the PR Service.
- To enable build history, add the following to each building
- system's <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- # It is recommended to activate "buildhistory" for testing the PR service
- INHERIT += "buildhistory"
- BUILDHISTORY_COMMIT = "1"
- </literallayout>
- For information on build history, see the
- "<link linkend='maintaining-build-output-quality'>Maintaining Build Output Quality</link>"
- section.
- </para>
-
- <note>
- <para>
- The OpenEmbedded build system does not maintain
- <filename>PR</filename> information as part of the
- shared state (sstate) packages.
-                    If you maintain an sstate feed, it is expected that either
- all your building systems that contribute to the sstate
- feed use a shared PR Service, or you do not run a PR
- Service on any of your building systems.
- Having some systems use a PR Service while others do
- not leads to obvious problems.
- </para>
-
- <para>
- For more information on shared state, see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#shared-state-cache'>Shared State Cache</ulink>"
- section in the Yocto Project Overview and Concepts
- Manual.
- </para>
- </note>
- </section>
-
- <section id='manually-bumping-pr'>
- <title>Manually Bumping PR</title>
-
- <para>
- The alternative to setting up a PR Service is to manually
- "bump" the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>
- variable.
- </para>
-
- <para>
- If a committed change results in changing the package
- output, then the value of the PR variable needs to be
- increased (or "bumped") as part of that commit.
- For new recipes you should add the <filename>PR</filename>
- variable and set its initial value equal to "r0", which is
- the default.
- Even though the default value is "r0", the practice of
- adding it to a new recipe makes it harder to forget to bump
-                    the variable when you make changes to the recipe in the future.
- </para>
-
- <para>
- If you are sharing a common <filename>.inc</filename> file
- with multiple recipes, you can also use the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-INC_PR'>INC_PR</ulink></filename>
- variable to ensure that the recipes sharing the
- <filename>.inc</filename> file are rebuilt when the
- <filename>.inc</filename> file itself is changed.
- The <filename>.inc</filename> file must set
- <filename>INC_PR</filename> (initially to "r0"), and all
- recipes referring to it should set <filename>PR</filename>
- to "${INC_PR}.0" initially, incrementing the last number
- when the recipe is changed.
- If the <filename>.inc</filename> file is changed then its
- <filename>INC_PR</filename> should be incremented.
- </para>
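-
-                <para>
-                    As a minimal sketch, a shared include file and a recipe
-                    that uses it (the file names are hypothetical) might
-                    contain lines such as:
-                    <literallayout class='monospaced'>
-     # example.inc -- INC_PR bumped from "r0" because the .inc file changed
-     INC_PR = "r1"
-
-     # example_1.0.bb -- last digit bumped each time the recipe itself changes
-     require example.inc
-     PR = "${INC_PR}.2"
-                    </literallayout>
-                </para>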
-
- <para>
- When upgrading the version of a binary package, assuming the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PV'>PV</ulink></filename>
- changes, the <filename>PR</filename> variable should be
- reset to "r0" (or "${INC_PR}.0" if you are using
- <filename>INC_PR</filename>).
- </para>
-
- <para>
- Usually, version increases occur only to binary packages.
- However, if for some reason <filename>PV</filename> changes
- but does not increase, you can increase the
- <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PE'>PE</ulink></filename>
- variable (Package Epoch).
- The <filename>PE</filename> variable defaults to "0".
- </para>
-
- <para>
- Binary package version numbering strives to follow the
- <ulink url='http://www.debian.org/doc/debian-policy/ch-controlfields.html'>
- Debian Version Field Policy Guidelines</ulink>.
- These guidelines define how versions are compared and what
- "increasing" a version means.
- </para>
- </section>
-
- <section id='automatically-incrementing-a-binary-package-revision-number'>
- <title>Automatically Incrementing a Package Version Number</title>
-
- <para>
- When fetching a repository, BitBake uses the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
- variable to determine the specific source code revision
- from which to build.
- You set the <filename>SRCREV</filename> variable to
- <ulink url='&YOCTO_DOCS_REF_URL;#var-AUTOREV'><filename>AUTOREV</filename></ulink>
- to cause the OpenEmbedded build system to automatically use the
- latest revision of the software:
- <literallayout class='monospaced'>
- SRCREV = "${AUTOREV}"
- </literallayout>
- </para>
-
- <para>
- Furthermore, you need to reference <filename>SRCPV</filename>
- in <filename>PV</filename> in order to automatically update
- the version whenever the revision of the source code
- changes.
- Here is an example:
- <literallayout class='monospaced'>
- PV = "1.0+git${SRCPV}"
- </literallayout>
- The OpenEmbedded build system substitutes
- <filename>SRCPV</filename> with the following:
- <literallayout class='monospaced'>
- AUTOINC+<replaceable>source_code_revision</replaceable>
- </literallayout>
- The build system replaces the <filename>AUTOINC</filename> with
- a number.
- The number used depends on the state of the PR Service:
- <itemizedlist>
- <listitem><para>
- If PR Service is enabled, the build system increments
- the number, which is similar to the behavior of
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>.
- This behavior results in linearly increasing package
- versions, which is desirable.
- Here is an example:
- <literallayout class='monospaced'>
- hello-world-git_0.0+git0+b6558dd387-r0.0_armv7a-neon.ipk
- hello-world-git_0.0+git1+dd2f5c3565-r0.0_armv7a-neon.ipk
- </literallayout>
- </para></listitem>
- <listitem><para>
- If PR Service is not enabled, the build system
- replaces the <filename>AUTOINC</filename>
- placeholder with zero (i.e. "0").
- This results in changing the package version since
- the source revision is included.
- However, package versions are not increased linearly.
- Here is an example:
- <literallayout class='monospaced'>
- hello-world-git_0.0+git0+b6558dd387-r0.0_armv7a-neon.ipk
- hello-world-git_0.0+git0+dd2f5c3565-r0.0_armv7a-neon.ipk
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- In summary, the OpenEmbedded build system does not track the
- history of binary package versions for this purpose.
- <filename>AUTOINC</filename>, in this case, is comparable to
- <filename>PR</filename>.
- If PR server is not enabled, <filename>AUTOINC</filename>
- in the package version is simply replaced by "0".
- If PR server is enabled, the build system keeps track of the
- package versions and bumps the number when the package
- revision changes.
- </para>
- </section>
- </section>
-
- <section id='handling-optional-module-packaging'>
- <title>Handling Optional Module Packaging</title>
-
- <para>
- Many pieces of software split functionality into optional
- modules (or plugins) and the plugins that are built
- might depend on configuration options.
- To avoid having to duplicate the logic that determines what
- modules are available in your recipe or to avoid having
- to package each module by hand, the OpenEmbedded build system
- provides functionality to handle module packaging dynamically.
- </para>
-
- <para>
- To handle optional module packaging, you need to do two things:
- <itemizedlist>
- <listitem><para>Ensure the module packaging is actually
- done.</para></listitem>
- <listitem><para>Ensure that any dependencies on optional
- modules from other recipes are satisfied by your recipe.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id='making-sure-the-packaging-is-done'>
- <title>Making Sure the Packaging is Done</title>
-
- <para>
- To ensure the module packaging actually gets done, you use
- the <filename>do_split_packages</filename> function within
- the <filename>populate_packages</filename> Python function
- in your recipe.
- The <filename>do_split_packages</filename> function
- searches for a pattern of files or directories under a
- specified path and creates a package for each one it finds
- by appending to the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES'><filename>PACKAGES</filename></ulink>
- variable and setting the appropriate values for
- <filename>FILES_packagename</filename>,
- <filename>RDEPENDS_packagename</filename>,
- <filename>DESCRIPTION_packagename</filename>, and so forth.
- Here is an example from the <filename>lighttpd</filename>
- recipe:
- <literallayout class='monospaced'>
- python populate_packages_prepend () {
- lighttpd_libdir = d.expand('${libdir}')
- do_split_packages(d, lighttpd_libdir, '^mod_(.*)\.so$',
- 'lighttpd-module-%s', 'Lighttpd module for %s',
- extra_depends='')
- }
- </literallayout>
- The previous example specifies a number of things in the
- call to <filename>do_split_packages</filename>.
- <itemizedlist>
- <listitem><para>A directory within the files installed
- by your recipe through <filename>do_install</filename>
- in which to search.</para></listitem>
- <listitem><para>A regular expression used to match module
- files in that directory.
- In the example, note the parentheses () that mark
- the part of the expression from which the module
- name should be derived.</para></listitem>
- <listitem><para>A pattern to use for the package names.
- </para></listitem>
- <listitem><para>A description for each package.
- </para></listitem>
- <listitem><para>An empty string for
- <filename>extra_depends</filename>, which disables
- the default dependency on the main
- <filename>lighttpd</filename> package.
- Thus, if a file in <filename>${libdir}</filename>
- called <filename>mod_alias.so</filename> is found,
- a package called <filename>lighttpd-module-alias</filename>
- is created for it and the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DESCRIPTION'><filename>DESCRIPTION</filename></ulink>
- is set to "Lighttpd module for alias".</para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Often, packaging modules is as simple as the previous
- example.
- However, more advanced options exist that you can use
- within <filename>do_split_packages</filename> to modify its
- behavior.
- And, if you need to, you can add more logic by specifying
- a hook function that is called for each package.
- It is also perfectly acceptable to call
- <filename>do_split_packages</filename> multiple times if
- you have more than one set of modules to package.
- </para>
-
- <para>
- For more examples that show how to use
- <filename>do_split_packages</filename>, see the
- <filename>connman.inc</filename> file in the
- <filename>meta/recipes-connectivity/connman/</filename>
- directory of the <filename>poky</filename>
- <ulink url='&YOCTO_DOCS_OM_URL;#yocto-project-repositories'>source repository</ulink>.
- You can also find examples in
- <filename>meta/classes/kernel.bbclass</filename>.
- </para>
-
- <para>
- Following is a reference that shows
- <filename>do_split_packages</filename> mandatory and
- optional arguments:
- <literallayout class='monospaced'>
- Mandatory arguments
-
- root
- The path in which to search
- file_regex
- Regular expression to match searched files.
- Use parentheses () to mark the part of this
- expression that should be used to derive the
- module name (to be substituted where %s is
- used in other function arguments as noted below)
- output_pattern
- Pattern to use for the package names. Must
- include %s.
- description
- Description to set for each package. Must
- include %s.
-
- Optional arguments
-
- postinst
- Postinstall script to use for all packages
- (as a string)
- recursive
- True to perform a recursive search - default
- False
- hook
- A hook function to be called for every match.
- The function will be called with the following
- arguments (in the order listed):
-
- f
- Full path to the file/directory match
- pkg
- The package name
- file_regex
- As above
- output_pattern
- As above
- modulename
- The module name derived using file_regex
-
- extra_depends
- Extra runtime dependencies (RDEPENDS) to be
- set for all packages. The default value of None
- causes a dependency on the main package
- (${PN}) - if you do not want this, pass empty
- string '' for this parameter.
- aux_files_pattern
- Extra item(s) to be added to FILES for each
- package. Can be a single string item or a list
- of strings for multiple items. Must include %s.
- postrm
- postrm script to use for all packages (as a
- string)
- allow_dirs
- True to allow directories to be matched -
- default False
- prepend
- If True, prepend created packages to PACKAGES
- instead of the default False which appends them
- match_path
- match file_regex on the whole relative path to
- the root rather than just the file name
- aux_files_pattern_verbatim
- Extra item(s) to be added to FILES for each
- package, using the actual derived module name
- rather than converting it to something legal
- for a package name. Can be a single string item
- or a list of strings for multiple items. Must
- include %s.
- allow_links
- True to allow symlinks to be matched - default
- False
- summary
- Summary to set for each package. Must include %s;
- defaults to description if not set.
- </literallayout>
- </para>
- </section>
-
- <section id='satisfying-dependencies'>
- <title>Satisfying Dependencies</title>
-
- <para>
- The second part for handling optional module packaging
- is to ensure that any dependencies on optional modules
- from other recipes are satisfied by your recipe.
- You can be sure these dependencies are satisfied by
- using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGES_DYNAMIC'><filename>PACKAGES_DYNAMIC</filename></ulink> variable.
- Here is an example that continues with the
- <filename>lighttpd</filename> recipe shown earlier:
- <literallayout class='monospaced'>
- PACKAGES_DYNAMIC = "lighttpd-module-.*"
- </literallayout>
- The name specified in the regular expression can of
- course be anything.
- In this example, it is <filename>lighttpd-module-</filename>
- and is specified as the prefix to ensure that any
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>
- and <ulink url='&YOCTO_DOCS_REF_URL;#var-RRECOMMENDS'><filename>RRECOMMENDS</filename></ulink>
- on a package name starting with the prefix are satisfied
- during build time.
- If you are using <filename>do_split_packages</filename>
- as described in the previous section, the value you put in
- <filename>PACKAGES_DYNAMIC</filename> should correspond to
- the name pattern specified in the call to
- <filename>do_split_packages</filename>.
- </para>
- </section>
- </section>
-
- <section id='using-runtime-package-management'>
- <title>Using Runtime Package Management</title>
-
- <para>
- During a build, BitBake always transforms a recipe into one or
- more packages.
- For example, BitBake takes the <filename>bash</filename> recipe
- and produces a number of packages (e.g.
- <filename>bash</filename>, <filename>bash-bashbug</filename>,
- <filename>bash-completion</filename>,
- <filename>bash-completion-dbg</filename>,
- <filename>bash-completion-dev</filename>,
- <filename>bash-completion-extra</filename>,
- <filename>bash-dbg</filename>, and so forth).
- Not all generated packages are included in an image.
- </para>
-
- <para>
- In several situations, you might need to update, add, remove,
- or query the packages on a target device at runtime
- (i.e. without having to generate a new image).
- Examples of such situations include:
- <itemizedlist>
- <listitem><para>
- You want to provide in-the-field updates to deployed
- devices (e.g. security updates).
- </para></listitem>
- <listitem><para>
- You want to have a fast turn-around development cycle
- for one or more applications that run on your device.
- </para></listitem>
- <listitem><para>
- You want to temporarily install the "debug" packages
- of various applications on your device so that
- debugging can be greatly improved by allowing
- access to symbols and source debugging.
- </para></listitem>
- <listitem><para>
- You want to deploy a more minimal package selection of
- your device but allow in-the-field updates to add a
- larger selection for customization.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- In all these situations, you have something similar to a more
- traditional Linux distribution in that in-field devices
- are able to receive pre-compiled packages from a server for
- installation or update.
- Being able to install these packages on a running,
- in-field device is what is termed "runtime package
- management".
- </para>
-
- <para>
- In order to use runtime package management, you
- need a host or server machine that serves up the pre-compiled
- packages plus the required metadata.
- You also need package manipulation tools on the target.
- The build machine is a likely candidate to act as the server.
- However, that machine does not necessarily have to be the
- package server.
- The build machine could push its artifacts to another machine
- that acts as the server (e.g. Internet-facing).
- In fact, doing so is advantageous for a production
- environment as getting the packages away from the
- development system's build directory prevents accidental
- overwrites.
- </para>
-
- <para>
- A simple build that targets just one device produces
- more than one package database.
- In other words, the packages produced by a build are separated
- out into a couple of different package groupings based on
- criteria such as the target's CPU architecture, the target
- board, or the C library used on the target.
- For example, a build targeting the <filename>qemux86</filename>
- device produces the following three package databases:
- <filename>noarch</filename>, <filename>i586</filename>, and
- <filename>qemux86</filename>.
- If you wanted your <filename>qemux86</filename> device to be
- aware of all the packages that were available to it,
- you would need to point it to each of these databases
- individually.
- In a similar way, a traditional Linux distribution usually is
- configured to be aware of a number of software repositories
- from which it retrieves packages.
- </para>
-
- <para>
- Using runtime package management is completely optional and
- not required for a successful build or deployment in any
- way.
- But if you want to make use of runtime package management,
-                you need to do a couple of things above and beyond the basics.
- The remainder of this section describes what you need to do.
- </para>
-
- <section id='runtime-package-management-build'>
- <title>Build Considerations</title>
-
- <para>
- This section describes build considerations of which you
- need to be aware in order to provide support for runtime
- package management.
- </para>
-
- <para>
- When BitBake generates packages, it needs to know
- what format or formats to use.
- In your configuration, you use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></ulink>
- variable to specify the format:
- <orderedlist>
- <listitem><para>
- Open the <filename>local.conf</filename> file
- inside your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- (e.g. <filename>~/poky/build/conf/local.conf</filename>).
- </para></listitem>
- <listitem><para>
- Select the desired package format as follows:
- <literallayout class='monospaced'>
-     PACKAGE_CLASSES ?= "package_<replaceable>packageformat</replaceable>"
- </literallayout>
-                            where <replaceable>packageformat</replaceable>
-                            can be "ipk", "rpm", "deb", or "tar", which are the
-                            supported package formats.
- <note>
-                                Because the Yocto Project supports four
-                                different package formats, you can set the
-                                variable with more than one argument, as shown
-                                in the example following this list.
-                                However, the OpenEmbedded build system only
-                                uses the first argument when creating an image
-                                or Software Development Kit (SDK).
- </note>
- </para></listitem>
- </orderedlist>
- </para>
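-
-            <para>
-                For example, the following hypothetical setting produces both
-                RPM and IPK packages, while images and SDKs are still created
-                from the RPM packages because "package_rpm" is listed first:
-                <literallayout class='monospaced'>
-     PACKAGE_CLASSES ?= "package_rpm package_ipk"
-                </literallayout>
-            </para>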
-
- <para>
- If you would like your image to start off with a basic
- package database containing the packages in your current
- build as well as to have the relevant tools available on the
- target for runtime package management, you can include
- "package-management" in the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>
- variable.
- Including "package-management" in this configuration
- variable ensures that when the image is assembled for your
- target, the image includes the currently-known package
- databases as well as the target-specific tools required
- for runtime package management to be performed on the
- target.
- However, this is not strictly necessary.
- You could start your image off without any databases
- but only include the required on-target package
- tool(s).
- As an example, you could include "opkg" in your
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>
- variable if you are using the IPK package format.
- You can then initialize your target's package database(s)
- later once your image is up and running.
- </para>
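-
-            <para>
-                As a minimal sketch, assuming the IPK package format, either of
-                the following lines in an image recipe enables the behavior
-                described above: the first ships the package databases and the
-                on-target tools, while the second installs only the
-                <filename>opkg</filename> tool:
-                <literallayout class='monospaced'>
-     IMAGE_FEATURES += "package-management"
-
-     IMAGE_INSTALL_append = " opkg"
-                </literallayout>
-            </para>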
-
- <para>
- Whenever you perform any sort of build step that can
- potentially generate a package or modify existing
- package, it is always a good idea to re-generate the
- package index after the build by using the following
- command:
- <literallayout class='monospaced'>
- $ bitbake package-index
- </literallayout>
- It might be tempting to build the package and the
- package index at the same time with a command such as
- the following:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>some-package</replaceable> package-index
- </literallayout>
- Do not do this as BitBake does not schedule the package
- index for after the completion of the package you are
- building.
- Consequently, you cannot be sure of the package index
- including information for the package you just built.
- Thus, be sure to run the package update step separately
- after building any packages.
- </para>
-
- <para>
- You can use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_ARCHS'><filename>PACKAGE_FEED_ARCHS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_BASE_PATHS'><filename>PACKAGE_FEED_BASE_PATHS</filename></ulink>,
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_URIS'><filename>PACKAGE_FEED_URIS</filename></ulink>
- variables to pre-configure target images to use a package
- feed.
- If you do not define these variables, then manual steps
- as described in the subsequent sections are necessary to
- configure the target.
- You should set these variables before building the image
- in order to produce a correctly configured image.
- </para>
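-
-            <para>
-                Here is a minimal sketch of such a pre-configuration in
-                <filename>local.conf</filename>.
-                The server name and repository layout are hypothetical and
-                simply mirror the feed layout used in the target setup
-                examples later in this section:
-                <literallayout class='monospaced'>
-     PACKAGE_FEED_URIS = "http://my.server/repo"
-     PACKAGE_FEED_BASE_PATHS = "rpm"
-     PACKAGE_FEED_ARCHS = "all i586 qemux86"
-                </literallayout>
-                These values combine to form feed URIs such as
-                <filename>http://my.server/repo/rpm/i586</filename>.
-            </para>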
-
- <para>
- When your build is complete, your packages reside in the
- <filename>${TMPDIR}/deploy/<replaceable>packageformat</replaceable></filename>
- directory.
- For example, if
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink><filename>}</filename>
- is <filename>tmp</filename> and your selected package type
- is RPM, then your RPM packages are available in
- <filename>tmp/deploy/rpm</filename>.
- </para>
- </section>
-
- <section id='runtime-package-management-server'>
- <title>Host or Server Machine Setup</title>
-
- <para>
- Although other protocols are possible, a server using HTTP
- typically serves packages.
- If you want to use HTTP, then set up and configure a
- web server such as Apache 2, lighttpd, or
- SimpleHTTPServer on the machine serving the packages.
- </para>
-
- <para>
- To keep things simple, this section describes how to set
- up a SimpleHTTPServer web server to share package feeds
- from the developer's machine.
- Although this server might not be the best for a production
-                environment, the setup is simple and straightforward.
- Should you want to use a different server more suited for
- production (e.g. Apache 2, Lighttpd, or Nginx), take the
- appropriate steps to do so.
- </para>
-
- <para>
- From within the build directory where you have built an
- image based on your packaging choice (i.e. the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></ulink>
- setting), simply start the server.
-                The following example assumes that your packages are in
-                <filename>~/poky/build/tmp/deploy/rpm</filename> and that the
-                <filename>PACKAGE_CLASSES</filename> setting is
-                "package_rpm":
- <literallayout class='monospaced'>
- $ cd ~/poky/build/tmp/deploy/rpm
- $ python -m SimpleHTTPServer
- </literallayout>
- </para>
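-
-            <para>
-                On hosts where only Python 3 is installed, the equivalent
-                module is <filename>http.server</filename>:
-                <literallayout class='monospaced'>
-     $ cd ~/poky/build/tmp/deploy/rpm
-     $ python3 -m http.server
-                </literallayout>
-            </para>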
- </section>
-
- <section id='runtime-package-management-target'>
- <title>Target Setup</title>
-
- <para>
- Setting up the target differs depending on the
- package management system.
- This section provides information for RPM, IPK, and DEB.
- </para>
-
- <section id='runtime-package-management-target-rpm'>
- <title>Using RPM</title>
-
- <para>
- The
- <ulink url='https://en.wikipedia.org/wiki/DNF_(software)'>Dandified Packaging Tool</ulink>
- (DNF) performs runtime package management of RPM
- packages.
- In order to use DNF for runtime package management,
- you must perform an initial setup on the target
- machine for cases where the
- <filename>PACKAGE_FEED_*</filename> variables were not
- set as part of the image that is running on the
- target.
-                    This means that if you built your image without using
-                    these variables as part of the build and the image is
-                    now running on the target, you need to perform the
-                    steps in this section if you want to use runtime
-                    package management.
- <note>
- For information on the
- <filename>PACKAGE_FEED_*</filename> variables, see
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_ARCHS'><filename>PACKAGE_FEED_ARCHS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_BASE_PATHS'><filename>PACKAGE_FEED_BASE_PATHS</filename></ulink>,
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_URIS'><filename>PACKAGE_FEED_URIS</filename></ulink>
- in the Yocto Project Reference Manual variables
- glossary.
- </note>
- </para>
-
- <para>
- On the target, you must inform DNF that package
- databases are available.
- You do this by creating a file named
- <filename>/etc/yum.repos.d/oe-packages.repo</filename>
-                    and defining the <filename>oe-packages</filename> repository.
- </para>
-
- <para>
- As an example, assume the target is able to use the
- following package databases:
- <filename>all</filename>, <filename>i586</filename>,
- and <filename>qemux86</filename> from a server named
- <filename>my.server</filename>.
- The specifics for setting up the web server are up to
- you.
- The critical requirement is that the URIs in the
- target repository configuration point to the
- correct remote location for the feeds.
- <note><title>Tip</title>
- For development purposes, you can point the web
- server to the build system's
- <filename>deploy</filename> directory.
- However, for production use, it is better to copy
- the package directories to a location outside of
- the build area and use that location.
- Doing so avoids situations where the build system
- overwrites or changes the
- <filename>deploy</filename> directory.
- </note>
- </para>
-
- <para>
- When telling DNF where to look for the package
- databases, you must declare individual locations
- per architecture or a single location used for all
- architectures.
- You cannot do both:
- <itemizedlist>
- <listitem><para>
- <emphasis>Create an Explicit List of Architectures:</emphasis>
- Define individual base URLs to identify where
- each package database is located:
- <literallayout class='monospaced'>
- [oe-packages]
- baseurl=http://my.server/rpm/i586 http://my.server/rpm/qemux86 http://my.server/rpm/all
- </literallayout>
- This example informs DNF about individual
- package databases for all three architectures.
- </para></listitem>
- <listitem><para>
- <emphasis>Create a Single (Full) Package Index:</emphasis>
- Define a single base URL that identifies where
- a full package database is located:
- <literallayout class='monospaced'>
- [oe-packages]
- baseurl=http://my.server/rpm
- </literallayout>
- This example informs DNF about a single package
- database that contains all the package index
- information for all supported architectures.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Once you have informed DNF where to find the package
- databases, you need to fetch them:
- <literallayout class='monospaced'>
- # dnf makecache
- </literallayout>
- DNF is now able to find, install, and upgrade packages
- from the specified repository or repositories.
- <note>
- See the
- <ulink url='http://dnf.readthedocs.io/en/latest/'>DNF documentation</ulink>
- for additional information.
- </note>
- </para>
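-
-                <para>
-                    At this point, you can manage packages with the usual DNF
-                    commands, for example (the package name below is only a
-                    placeholder):
-                    <literallayout class='monospaced'>
-     # dnf search <replaceable>package_name</replaceable>
-     # dnf install <replaceable>package_name</replaceable>
-                    </literallayout>
-                </para>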
- </section>
-
- <section id='runtime-package-management-target-ipk'>
- <title>Using IPK</title>
-
- <para>
- The <filename>opkg</filename> application performs
- runtime package management of IPK packages.
- You must perform an initial setup for
- <filename>opkg</filename> on the target machine
- if the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_ARCHS'><filename>PACKAGE_FEED_ARCHS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_BASE_PATHS'><filename>PACKAGE_FEED_BASE_PATHS</filename></ulink>, and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_URIS'><filename>PACKAGE_FEED_URIS</filename></ulink>
- variables have not been set or the target image was
- built before the variables were set.
- </para>
-
- <para>
- The <filename>opkg</filename> application uses
- configuration files to find available package
- databases.
- Thus, you need to create a configuration file inside
-                    the <filename>/etc/opkg/</filename> directory, which
- informs <filename>opkg</filename> of any repository
- you want to use.
- </para>
-
- <para>
- As an example, suppose you are serving packages from a
- <filename>ipk/</filename> directory containing the
- <filename>i586</filename>,
- <filename>all</filename>, and
- <filename>qemux86</filename> databases through an
- HTTP server named <filename>my.server</filename>.
- On the target, create a configuration file
- (e.g. <filename>my_repo.conf</filename>) inside the
- <filename>/etc/opkg/</filename> directory containing
- the following:
- <literallayout class='monospaced'>
- src/gz all http://my.server/ipk/all
- src/gz i586 http://my.server/ipk/i586
- src/gz qemux86 http://my.server/ipk/qemux86
- </literallayout>
- Next, instruct <filename>opkg</filename> to fetch
- the repository information:
- <literallayout class='monospaced'>
- # opkg update
- </literallayout>
- The <filename>opkg</filename> application is now able
- to find, install, and upgrade packages from the
- specified repository.
- </para>
- </section>
-
- <section id='runtime-package-management-target-deb'>
- <title>Using DEB</title>
-
- <para>
- The <filename>apt</filename> application performs
- runtime package management of DEB packages.
- This application uses a source list file to find
- available package databases.
- You must perform an initial setup for
- <filename>apt</filename> on the target machine
- if the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_ARCHS'><filename>PACKAGE_FEED_ARCHS</filename></ulink>,
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_BASE_PATHS'><filename>PACKAGE_FEED_BASE_PATHS</filename></ulink>, and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_FEED_URIS'><filename>PACKAGE_FEED_URIS</filename></ulink>
- variables have not been set or the target image was
- built before the variables were set.
- </para>
-
- <para>
- To inform <filename>apt</filename> of the repository
- you want to use, you might create a list file (e.g.
- <filename>my_repo.list</filename>) inside the
- <filename>/etc/apt/sources.list.d/</filename>
- directory.
- As an example, suppose you are serving packages from a
- <filename>deb/</filename> directory containing the
- <filename>i586</filename>,
- <filename>all</filename>, and
- <filename>qemux86</filename> databases through an
- HTTP server named <filename>my.server</filename>.
- The list file should contain:
- <literallayout class='monospaced'>
- deb http://my.server/deb/all ./
- deb http://my.server/deb/i586 ./
- deb http://my.server/deb/qemux86 ./
- </literallayout>
- Next, instruct the <filename>apt</filename>
- application to fetch the repository information:
- <literallayout class='monospaced'>
- # apt-get update
- </literallayout>
- After this step, <filename>apt</filename> is able
- to find, install, and upgrade packages from the
- specified repository.
- </para>
- </section>
- </section>
- </section>
-
- <section id='generating-and-using-signed-packages'>
- <title>Generating and Using Signed Packages</title>
- <para>
- In order to add security to RPM packages used during a build,
- you can take steps to securely sign them.
- Once a signature is verified, the OpenEmbedded build system
- can use the package in the build.
- If security fails for a signed package, the build system
- aborts the build.
- </para>
-
- <para>
- This section describes how to sign RPM packages during a build
- and how to use signed package feeds (repositories) when
- doing a build.
- </para>
-
- <section id='signing-rpm-packages'>
- <title>Signing RPM Packages</title>
-
- <para>
- To enable signing RPM packages, you must set up the
- following configurations in either your
-                <filename>local.conf</filename> or
-                <filename>distro.conf</filename> file:
- <literallayout class='monospaced'>
- # Inherit sign_rpm.bbclass to enable signing functionality
- INHERIT += " sign_rpm"
- # Define the GPG key that will be used for signing.
- RPM_GPG_NAME = "<replaceable>key_name</replaceable>"
- # Provide passphrase for the key
- RPM_GPG_PASSPHRASE = "<replaceable>passphrase</replaceable>"
- </literallayout>
- <note>
- Be sure to supply appropriate values for both
- <replaceable>key_name</replaceable> and
-                    <replaceable>passphrase</replaceable>.
- </note>
- Aside from the
- <filename>RPM_GPG_NAME</filename> and
- <filename>RPM_GPG_PASSPHRASE</filename> variables in the
- previous example, two optional variables related to signing
- exist:
- <itemizedlist>
- <listitem><para>
- <emphasis><filename>GPG_BIN</filename>:</emphasis>
- Specifies a <filename>gpg</filename> binary/wrapper
- that is executed when the package is signed.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>GPG_PATH</filename>:</emphasis>
- Specifies the <filename>gpg</filename> home
- directory used when the package is signed.
- </para></listitem>
- </itemizedlist>
- </para>
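-
-            <para>
-                For example, you might set these optional variables as
-                follows.
-                The binary and home directory paths shown here are purely
-                hypothetical:
-                <literallayout class='monospaced'>
-     GPG_BIN = "/usr/local/bin/gpg-wrapper"
-     GPG_PATH = "/home/builder/.gnupg"
-                </literallayout>
-            </para>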
- </section>
-
- <section id='processing-package-feeds'>
- <title>Processing Package Feeds</title>
-
- <para>
- In addition to being able to sign RPM packages, you can
- also enable signed package feeds for IPK and RPM packages.
- </para>
-
- <para>
- The steps you need to take to enable signed package feed
- use are similar to the steps used to sign RPM packages.
- You must define the following in your
-                <filename>local.conf</filename> or
-                <filename>distro.conf</filename> file:
- <literallayout class='monospaced'>
- INHERIT += "sign_package_feed"
- PACKAGE_FEED_GPG_NAME = "<replaceable>key_name</replaceable>"
- PACKAGE_FEED_GPG_PASSPHRASE_FILE = "<replaceable>path_to_file_containing_passphrase</replaceable>"
- </literallayout>
- For signed package feeds, the passphrase must exist in a
- separate file, which is pointed to by the
- <filename>PACKAGE_FEED_GPG_PASSPHRASE_FILE</filename>
- variable.
- Regarding security, keeping a plain text passphrase out of
- the configuration is more secure.
- </para>
-
- <para>
- Aside from the
- <filename>PACKAGE_FEED_GPG_NAME</filename> and
- <filename>PACKAGE_FEED_GPG_PASSPHRASE_FILE</filename>
- variables, three optional variables related to signed
- package feeds exist:
- <itemizedlist>
- <listitem><para>
- <emphasis><filename>GPG_BIN</filename>:</emphasis>
- Specifies a <filename>gpg</filename> binary/wrapper
- that is executed when the package is signed.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>GPG_PATH</filename>:</emphasis>
- Specifies the <filename>gpg</filename> home
- directory used when the package is signed.
- </para></listitem>
- <listitem><para>
- <emphasis><filename>PACKAGE_FEED_GPG_SIGNATURE_TYPE</filename>:</emphasis>
- Specifies the type of <filename>gpg</filename>
- signature.
- This variable applies only to RPM and IPK package
- feeds.
-                        Allowable values for the
-                        <filename>PACKAGE_FEED_GPG_SIGNATURE_TYPE</filename>
-                        variable are "ASC", which is the default and specifies
-                        ASCII-armored, and "BIN", which specifies binary.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id='testing-packages-with-ptest'>
- <title>Testing Packages With ptest</title>
-
- <para>
- A Package Test (ptest) runs tests against packages built
- by the OpenEmbedded build system on the target machine.
- A ptest contains at least two items: the actual test, and
- a shell script (<filename>run-ptest</filename>) that starts
- the test.
- The shell script that starts the test must not contain
- the actual test - the script only starts the test.
- On the other hand, the test can be anything from a simple
- shell script that runs a binary and checks the output to
- an elaborate system of test binaries and data files.
- </para>
-
- <para>
- The test generates output in the format used by
- Automake:
- <literallayout class='monospaced'>
- <replaceable>result</replaceable>: <replaceable>testname</replaceable>
- </literallayout>
- where the result can be <filename>PASS</filename>,
- <filename>FAIL</filename>, or <filename>SKIP</filename>,
- and the testname can be any identifying string.
- </para>
-
- <para>
- For a list of Yocto Project recipes that are already
- enabled with ptest, see the
- <ulink url='https://wiki.yoctoproject.org/wiki/Ptest'>Ptest</ulink>
- wiki page.
- <note>
- A recipe is "ptest-enabled" if it inherits the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-ptest'><filename>ptest</filename></ulink>
- class.
- </note>
- </para>
-
- <section id='adding-ptest-to-your-build'>
- <title>Adding ptest to Your Build</title>
-
- <para>
- To add package testing to your build, add the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></ulink>
- and <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES'><filename>EXTRA_IMAGE_FEATURES</filename></ulink>
- variables to your <filename>local.conf</filename> file,
- which is found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_append = " ptest"
- EXTRA_IMAGE_FEATURES += "ptest-pkgs"
- </literallayout>
- Once your build is complete, the ptest files are installed
- into the
- <filename>/usr/lib/<replaceable>package</replaceable>/ptest</filename>
- directory within the image, where
- <filename><replaceable>package</replaceable></filename>
- is the name of the package.
- </para>
- </section>
-
- <section id='running-ptest'>
- <title>Running ptest</title>
-
- <para>
- The <filename>ptest-runner</filename> package installs a
- shell script that loops through all installed ptest test
- suites and runs them in sequence.
- Consequently, you might want to add this package to
- your image.
- </para>
- </section>
-
- <section id='getting-your-package-ready'>
- <title>Getting Your Package Ready</title>
-
- <para>
- In order to enable a recipe to run installed ptests
- on target hardware,
- you need to prepare the recipes that build the packages
- you want to test.
- Here is what you have to do for each recipe:
- <itemizedlist>
- <listitem><para><emphasis>Be sure the recipe
- inherits the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-ptest'><filename>ptest</filename></ulink>
- class:</emphasis>
- Include the following line in each recipe:
- <literallayout class='monospaced'>
- inherit ptest
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>Create <filename>run-ptest</filename>:</emphasis>
- This script starts your test.
- Locate the script where you will refer to it
- using
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>.
- Here is an example that starts a test for
- <filename>dbus</filename>:
- <literallayout class='monospaced'>
- #!/bin/sh
- cd test
- make -k runtest-TESTS
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>Ensure dependencies are
- met:</emphasis>
- If the test adds build or runtime dependencies
- that normally do not exist for the package
- (such as requiring "make" to run the test suite),
- use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>
- variables in your recipe in order for the package
- to meet the dependencies.
- Here is an example where the package has a runtime
- dependency on "make":
- <literallayout class='monospaced'>
- RDEPENDS_${PN}-ptest += "make"
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>Add a function to build the
- test suite:</emphasis>
- Not many packages support cross-compilation of
- their test suites.
- Consequently, you usually need to add a
- cross-compilation function to the package.
- </para>
-
- <para>Many packages based on Automake compile and
- run the test suite by using a single command
- such as <filename>make check</filename>.
- However, the host <filename>make check</filename>
- builds and runs on the same computer, while
- cross-compiling requires that the package is built
- on the host but executed for the target
-                    architecture (though often, as in the case of
-                    ptest, the execution occurs on the target).
-                    The version of Automake that ships with the
-                    Yocto Project includes a patch that separates
-                    building and execution.
-                    Consequently, packages that use the standard
-                    <filename>make check</filename> with this patched
-                    Automake cross-compile automatically.</para>
- <para>Regardless, you still must add a
- <filename>do_compile_ptest</filename> function to
- build the test suite.
- Add a function similar to the following to your
- recipe:
- <literallayout class='monospaced'>
- do_compile_ptest() {
- oe_runmake buildtest-TESTS
- }
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>Ensure special configurations
- are set:</emphasis>
- If the package requires special configurations
- prior to compiling the test code, you must
- insert a <filename>do_configure_ptest</filename>
- function into the recipe.
- </para></listitem>
- <listitem><para><emphasis>Install the test
- suite:</emphasis>
- The <filename>ptest</filename> class
- automatically copies the file
- <filename>run-ptest</filename> to the target and
- then runs make <filename>install-ptest</filename>
- to run the tests.
-                        If this is not enough, you need to create a
-                        <filename>do_install_ptest</filename> function
-                        (a minimal sketch follows this list) and
-                        make sure it gets called after
-                        "make install-ptest" completes.
- </para></listitem>
- </itemizedlist>
- </para>
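-
-            <para>
-                Here is a minimal sketch of such a
-                <filename>do_install_ptest</filename> function.
-                The extra data directory is hypothetical; adapt the copied
-                files to whatever your test suite actually needs:
-                <literallayout class='monospaced'>
-     do_install_ptest() {
-         # Hypothetical: copy extra test data that "make install-ptest"
-         # does not install on its own
-         install -d ${D}${PTEST_PATH}/data
-         cp -r ${S}/tests/data/* ${D}${PTEST_PATH}/data/
-     }
-                </literallayout>
-            </para>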
- </section>
- </section>
-
- <section id='creating-node-package-manager-npm-packages'>
- <title>Creating Node Package Manager (NPM) Packages</title>
-
- <para>
- <ulink url='https://en.wikipedia.org/wiki/Npm_(software)'>NPM</ulink>
- is a package manager for the JavaScript programming
- language.
- The Yocto Project supports the NPM
- <ulink url='&YOCTO_DOCS_BB_URL;#bb-fetchers'>fetcher</ulink>.
- You can use this fetcher in combination with
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-devtool-reference'><filename>devtool</filename></ulink>
- to create recipes that produce NPM packages.
- </para>
-
- <para>
- Two workflows exist that allow you to create NPM packages
- using <filename>devtool</filename>: the NPM registry modules
- method and the NPM project code method.
- <note>
- While it is possible to create NPM recipes manually,
- using <filename>devtool</filename> is far simpler.
- </note>
- Additionally, some requirements and caveats exist.
- </para>
-
- <section id='npm-package-creation-requirements'>
- <title>Requirements and Caveats</title>
-
- <para>
- You need to be aware of the following before using
- <filename>devtool</filename> to create NPM packages:
- <itemizedlist>
- <listitem><para>
- Of the two methods that you can use
- <filename>devtool</filename> to create NPM
- packages, the registry approach is slightly
- simpler.
- However, you might consider the project
- approach because you do not have to publish
- your module in the NPM registry
- (<ulink url='https://docs.npmjs.com/misc/registry'><filename>npm-registry</filename></ulink>),
- which is NPM's public registry.
- </para></listitem>
- <listitem><para>
- Be familiar with
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-devtool-reference'><filename>devtool</filename></ulink>.
- </para></listitem>
- <listitem><para>
- The NPM host tools need the native
- <filename>nodejs-npm</filename> package, which
- is part of the OpenEmbedded environment.
-                        You need to get the package by cloning the
-                        <ulink url='https://github.com/openembedded/meta-openembedded'></ulink>
-                        repository from GitHub.
-                        Be sure to add the path to your local copy to
-                        your <filename>bblayers.conf</filename> file
-                        (see the example commands following this list).
- </para></listitem>
- <listitem><para>
- <filename>devtool</filename> cannot detect
- native libraries in module dependencies.
- Consequently, you must manually add packages
- to your recipe.
- </para></listitem>
- <listitem><para>
- While deploying NPM packages,
- <filename>devtool</filename> cannot determine
- which dependent packages are missing on the
- target (e.g. the node runtime
- <filename>nodejs</filename>).
- Consequently, you need to find out what
- files are missing and be sure they are on the
- target.
- </para></listitem>
- <listitem><para>
- Although you might not need NPM to run your
- node package, it is useful to have NPM on your
- target.
- The NPM package name is
- <filename>nodejs-npm</filename>.
- </para></listitem>
- </itemizedlist>
- </para>
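-
-            <para>
-                As an example only, one way to make the
-                <filename>nodejs</filename> recipes from
-                <filename>meta-openembedded</filename> available is to clone
-                the repository and add its <filename>meta-oe</filename> layer
-                from within your build directory (adjust the paths to match
-                your checkout):
-                <literallayout class='monospaced'>
-     $ git clone https://github.com/openembedded/meta-openembedded
-     $ bitbake-layers add-layer meta-openembedded/meta-oe
-                </literallayout>
-            </para>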
- </section>
-
- <section id='npm-using-the-registry-modules-method'>
- <title>Using the Registry Modules Method</title>
-
- <para>
- This section presents an example that uses the
- <filename>cute-files</filename> module, which is a
- file browser web application.
- <note>
- You must know the <filename>cute-files</filename>
- module version.
- </note>
- </para>
-
- <para>
- The first thing you need to do is use
- <filename>devtool</filename> and the NPM fetcher to
- create the recipe:
- <literallayout class='monospaced'>
- $ devtool add "npm://registry.npmjs.org;name=cute-files;version=1.0.2"
- </literallayout>
- The <filename>devtool add</filename> command runs
- <filename>recipetool create</filename> and uses the
- same fetch URI to download each dependency and capture
- license details where possible.
- The result is a generated recipe.
- </para>
-
- <para>
- The recipe file is fairly simple and contains every
- license that <filename>recipetool</filename> finds
- and includes the licenses in the recipe's
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LIC_FILES_CHKSUM'><filename>LIC_FILES_CHKSUM</filename></ulink>
- variables.
- You need to examine the variables and look for those
- with "unknown" in the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE'><filename>LICENSE</filename></ulink>
- field.
- You need to track down the license information for
- "unknown" modules and manually add the information to the
- recipe.
- </para>
-
- <para>
- <filename>recipetool</filename> creates "shrinkwrap" and
- "lockdown" files for your recipe.
-                Shrinkwrap files capture the versions of all dependent
-                modules.
-                Many packages do not provide shrinkwrap files, so
-                <filename>recipetool</filename> creates a shrinkwrap
-                file as it runs.
- You can replace the shrinkwrap file with your own file
- by setting the <filename>NPM_SHRINKWRAP</filename>
- variable.
- </para>
-
- <para>
- Lockdown files contain the checksum for each module
- to determine if your users download the same files when
- building with a recipe.
- Lockdown files ensure that dependencies have not been
- changed and that your NPM registry is still providing
- the same file.
- <note>
- A package is created for each sub-module.
- This policy is the only practical way to have the
- licenses for all of the dependencies represented
- in the license manifest of the image.
- </note>
- </para>
-
- <para>
- The <filename>devtool edit-recipe</filename> command
- lets you take a look at the recipe:
- <literallayout class='monospaced'>
- $ devtool edit-recipe cute-files
- SUMMARY = "Turn any folder on your computer into a cute file browser, available on the local network."
- LICENSE = "BSD-3-Clause &amp; Unknown &amp; MIT &amp; ISC"
- LIC_FILES_CHKSUM = "file://LICENSE;md5=71d98c0a1db42956787b1909c74a86ca \
- file://node_modules/content-disposition/LICENSE;md5=c6e0ce1e688c5ff16db06b7259e9cd20 \
- file://node_modules/express/LICENSE;md5=5513c00a5c36cd361da863dd9aa8875d \
- ...
-
- SRC_URI = "npm://registry.npmjs.org;name=cute-files;version=${PV}"
- NPM_SHRINKWRAP := "${THISDIR}/${PN}/npm-shrinkwrap.json"
- NPM_LOCKDOWN := "${THISDIR}/${PN}/lockdown.json"
- inherit npm
- # Must be set after inherit npm since that itself sets S
- S = "${WORKDIR}/npmpkg"
-
- LICENSE_${PN}-content-disposition = "MIT"
- ...
- LICENSE_${PN}-express = "MIT"
- LICENSE_${PN} = "MIT"
- </literallayout>
- Three key points exist in the previous example:
- <itemizedlist>
- <listitem><para>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- uses the NPM scheme so that the NPM fetcher
- is used.
- </para></listitem>
- <listitem><para>
- <filename>recipetool</filename> collects all
- the license information.
- If a sub-module's license is unavailable,
- the sub-module's name appears in the comments.
- </para></listitem>
- <listitem><para>
- The <filename>inherit npm</filename> statement
- causes the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-npm'><filename>npm</filename></ulink>
- class to package up all the modules.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- You can run the following command to build the
- <filename>cute-files</filename> package:
- <literallayout class='monospaced'>
- $ devtool build cute-files
- </literallayout>
- Remember that <filename>nodejs</filename> must be
- installed on the target before your package.
- </para>
-
- <para>
- Assuming 192.168.7.2 for the target's IP address, use
- the following command to deploy your package:
- <literallayout class='monospaced'>
- $ devtool deploy-target -s cute-files root@192.168.7.2
- </literallayout>
- Once the package is installed on the target, you can
- test the application:
- <note>
-                    Because of a known issue, you cannot simply run
- <filename>cute-files</filename> as you would if you
- had run <filename>npm install</filename>.
- </note>
- <literallayout class='monospaced'>
- $ cd /usr/lib/node_modules/cute-files
- $ node cute-files.js
- </literallayout>
- On a browser, go to
- <filename>http://192.168.7.2:3000</filename> and you
- see the following:
- <imagedata fileref="figures/cute-files-npm-example.png" align="center" width="6in" depth="4in" />
- </para>
-
- <para>
- You can find the recipe in
- <filename>workspace/recipes/cute-files</filename>.
- You can use the recipe in any layer you choose.
- </para>
- </section>
-
- <section id='npm-using-the-npm-projects-method'>
- <title>Using the NPM Projects Code Method</title>
-
- <para>
- Although it is useful to package modules already in the
- NPM registry, adding <filename>node.js</filename> projects
- under development is a more common developer use case.
- </para>
-
- <para>
- This section covers the NPM projects code method, which is
- very similar to the "registry" approach described in the
- previous section.
- In the NPM projects method, you provide
-                <filename>devtool</filename> with a URL that points to the
- source files.
- </para>
-
- <para>
-                Replicating the same example (i.e.
-                <filename>cute-files</filename>), use the following command:
- <literallayout class='monospaced'>
- $ devtool add https://github.com/martinaglv/cute-files.git
- </literallayout>
- The recipe this command generates is very similar to the
- recipe created in the previous section.
- However, the <filename>SRC_URI</filename> looks like the
- following:
- <literallayout class='monospaced'>
- SRC_URI = "git://github.com/martinaglv/cute-files.git;protocol=https \
- npm://registry.npmjs.org;name=commander;version=2.9.0;subdir=node_modules/commander \
- npm://registry.npmjs.org;name=express;version=4.14.0;subdir=node_modules/express \
- npm://registry.npmjs.org;name=content-disposition;version=0.3.0;subdir=node_modules/content-disposition \
- "
- </literallayout>
- In this example, the main module is taken from the Git
-                repository and dependencies are taken from the NPM registry.
- Other than those differences, the recipe is basically the
- same between the two methods.
- You can build and deploy the package exactly as described
- in the previous section that uses the registry modules
- method.
- </para>
- </section>
- </section>
- </section>
-
- <section id='efficiently-fetching-source-files-during-a-build'>
- <title>Efficiently Fetching Source Files During a Build</title>
-
- <para>
- The OpenEmbedded build system works with source files located
- through the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- variable.
- When you build something using BitBake, a big part of the operation
- is locating and downloading all the source tarballs.
- For images, downloading all the source for various packages can
- take a significant amount of time.
- </para>
-
- <para>
-        This section shows you how you can use mirrors to speed up
-        fetching source files and how you can pre-fetch files, both of which
-        lead to more efficient use of resources and time.
- </para>
-
- <section id='setting-up-effective-mirrors'>
- <title>Setting up Effective Mirrors</title>
-
- <para>
-                A good deal of the time that goes into a Yocto Project
-                build is simply spent downloading all of the source tarballs.
- Maybe you have been working with another build system
- (OpenEmbedded or Angstrom) for which you have built up a
- sizable directory of source tarballs.
- Or, perhaps someone else has such a directory for which you
- have read access.
- If so, you can save time by adding statements to your
- configuration file so that the build process checks local
- directories first for existing tarballs before checking the
- Internet.
- </para>
-
- <para>
- Here is an efficient way to set it up in your
- <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- SOURCE_MIRROR_URL ?= "file:///home/you/your-download-dir/"
- INHERIT += "own-mirrors"
- BB_GENERATE_MIRROR_TARBALLS = "1"
- # BB_NO_NETWORK = "1"
- </literallayout>
- </para>
-
- <para>
- In the previous example, the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BB_GENERATE_MIRROR_TARBALLS'><filename>BB_GENERATE_MIRROR_TARBALLS</filename></ulink>
- variable causes the OpenEmbedded build system to generate
- tarballs of the Git repositories and store them in the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'><filename>DL_DIR</filename></ulink>
- directory.
- Due to performance reasons, generating and storing these
- tarballs is not the build system's default behavior.
- </para>
-
- <para>
- You can also use the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PREMIRRORS'><filename>PREMIRRORS</filename></ulink>
- variable.
- For an example, see the variable's glossary entry in the
- Yocto Project Reference Manual.
- </para>
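-
-            <para>
-                As a minimal sketch, a <filename>PREMIRRORS</filename> setup
-                in <filename>local.conf</filename> might look like the
-                following, where the mirror URL is a hypothetical local
-                server:
-                <literallayout class='monospaced'>
-     PREMIRRORS_prepend = "\
-         git://.*/.* http://my.server/sources/ \n \
-         ftp://.*/.* http://my.server/sources/ \n \
-         http://.*/.* http://my.server/sources/ \n \
-         https://.*/.* http://my.server/sources/ \n"
-                </literallayout>
-                With this in place, the build system tries the mirror before
-                falling back to the upstream locations given in
-                <filename>SRC_URI</filename>.
-            </para>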
- </section>
-
- <section id='getting-source-files-and-suppressing-the-build'>
- <title>Getting Source Files and Suppressing the Build</title>
-
- <para>
- Another technique you can use to ready yourself for a
-                successive string of build operations is to pre-fetch
- all the source files without actually starting a build.
- This technique lets you work through any download issues
- and ultimately gathers all the source files into your
- download directory
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-build-downloads'><filename>build/downloads</filename></ulink>,
-                the location of which is defined by
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'><filename>DL_DIR</filename></ulink>.
- </para>
-
- <para>
- Use the following BitBake command form to fetch all the
- necessary sources without starting the build:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>target</replaceable> --runall=fetch
- </literallayout>
- This variation of the BitBake command guarantees that you
- have all the sources for that BitBake target should you
- disconnect from the Internet and want to do the build
- later offline.
- </para>
- </section>
- </section>
-
- <section id="selecting-an-initialization-manager">
- <title>Selecting an Initialization Manager</title>
-
- <para>
- By default, the Yocto Project uses SysVinit as the initialization
- manager.
- However, support also exists for systemd,
- which is a full replacement for init with
- parallel starting of services, reduced shell overhead and other
- features that are used by many distributions.
- </para>
-
- <para>
- Within the system, SysVinit treats system components as services.
- These services are maintained as shell scripts stored in the
- <filename>/etc/init.d/</filename> directory.
-        Services are organized into different run levels.
- This organization is maintained by putting links to the services
- in the <filename>/etc/rcN.d/</filename> directories, where
-        <replaceable>N</replaceable> is one of the following options:
- "S", "0", "1", "2", "3", "4", "5", or "6".
- <note>
- Each runlevel has a dependency on the previous runlevel.
- This dependency allows the services to work properly.
- </note>
- </para>
-
- <para>
-        In comparison, systemd treats system components as units.
-        A unit is a broader concept than a service:
-        units include several different types of entities, and
-        a service is just one of those types.
- The runlevel concept in SysVinit corresponds to the concept of a
- target in systemd, where target is also a type of supported unit.
- </para>
-
- <para>
-        In a SysVinit-based system, services load sequentially (i.e. one
-        by one) during startup, and parallelization is not supported.
- With systemd, services start in parallel.
- Needless to say, the method can have an impact on system startup
- performance.
- </para>
-
- <para>
- If you want to use SysVinit, you do
- not have to do anything.
- But, if you want to use systemd, you must
- take some steps as described in the following sections.
- </para>
-
- <section id='using-systemd-exclusively'>
- <title>Using systemd Exclusively</title>
-
- <para>
- Set these variables in your distribution configuration
- file as follows:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_append = " systemd"
- VIRTUAL-RUNTIME_init_manager = "systemd"
- </literallayout>
- You can also prevent the SysVinit
- distribution feature from
- being automatically enabled as follows:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
- </literallayout>
- Doing so removes any redundant SysVinit scripts.
- </para>
-
- <para>
- To remove initscripts from your image altogether,
- set this variable also:
- <literallayout class='monospaced'>
- VIRTUAL-RUNTIME_initscripts = ""
- </literallayout>
- </para>
-
- <para>
- For information on the backfill variable, see
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES_BACKFILL_CONSIDERED'><filename>DISTRO_FEATURES_BACKFILL_CONSIDERED</filename></ulink>.
- </para>
- </section>
-
- <section id='using-systemd-for-the-main-image-and-using-sysvinit-for-the-rescue-image'>
- <title>Using systemd for the Main Image and Using SysVinit for the Rescue Image</title>
-
- <para>
- Set these variables in your distribution configuration
- file as follows:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_append = " systemd"
- VIRTUAL-RUNTIME_init_manager = "systemd"
- </literallayout>
- Doing so causes your main image to use the
- <filename>packagegroup-core-boot.bb</filename> recipe and
- systemd.
- The rescue/minimal image cannot use this package group.
- However, it can install SysVinit
- and the appropriate packages will have support for both
- systemd and SysVinit.
- </para>
- </section>
- </section>
-
- <section id="selecting-dev-manager">
- <title>Selecting a Device Manager</title>
-
- <para>
- The Yocto Project provides multiple ways to manage the device
- manager (<filename>/dev</filename>):
- <itemizedlist>
-            <listitem><para><emphasis>Persistent and Pre-Populated <filename>/dev</filename>:</emphasis>
- For this case, the <filename>/dev</filename> directory
- is persistent and the required device nodes are created
- during the build.
- </para></listitem>
- <listitem><para><emphasis>Use <filename>devtmpfs</filename> with a Device Manager:</emphasis>
- For this case, the <filename>/dev</filename> directory
- is provided by the kernel as an in-memory file system and
- is automatically populated by the kernel at runtime.
- Additional configuration of device nodes is done in user
- space by a device manager like
- <filename>udev</filename> or
- <filename>busybox-mdev</filename>.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id="static-dev-management">
-        <title>Using Persistent and Pre-Populated <filename>/dev</filename></title>
-
- <para>
- To use the static method for device population, you need to
- set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-USE_DEVFS'><filename>USE_DEVFS</filename></ulink>
- variable to "0" as follows:
- <literallayout class='monospaced'>
- USE_DEVFS = "0"
- </literallayout>
- </para>
-
- <para>
- The content of the resulting <filename>/dev</filename>
- directory is defined in a Device Table file.
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_DEVICE_TABLES'><filename>IMAGE_DEVICE_TABLES</filename></ulink>
- variable defines the Device Table to use and should be set
- in the machine or distro configuration file.
- Alternatively, you can set this variable in your
- <filename>local.conf</filename> configuration file.
- </para>
-
- <para>
-            If you do not define the
-            <filename>IMAGE_DEVICE_TABLES</filename> variable, the default
-            <filename>device_table-minimal.txt</filename> is used.
-            Here is an example that selects a custom device table:
- <literallayout class='monospaced'>
- IMAGE_DEVICE_TABLES = "device_table-mymachine.txt"
- </literallayout>
- </para>
-
- <para>
- The population is handled by the <filename>makedevs</filename>
-            utility during image creation.
- </para>
- </section>
-
- <section id="devtmpfs-dev-management">
- <title>Using <filename>devtmpfs</filename> and a Device Manager</title>
-
- <para>
- To use the dynamic method for device population, you need to
- use (or be sure to set) the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-USE_DEVFS'><filename>USE_DEVFS</filename></ulink>
- variable to "1", which is the default:
- <literallayout class='monospaced'>
- USE_DEVFS = "1"
- </literallayout>
- With this setting, the resulting <filename>/dev</filename>
- directory is populated by the kernel using
- <filename>devtmpfs</filename>.
- Make sure the corresponding kernel configuration variable
-            <filename>CONFIG_DEVTMPFS</filename> is set when
-            you build the Linux kernel.
- </para>
-
- <para>
- All devices created by <filename>devtmpfs</filename> will be
- owned by <filename>root</filename> and have permissions
- <filename>0600</filename>.
- </para>
-
- <para>
- To have more control over the device nodes, you can use a
- device manager like <filename>udev</filename> or
- <filename>busybox-mdev</filename>.
- You choose the device manager by defining the
- <filename>VIRTUAL-RUNTIME_dev_manager</filename> variable
- in your machine or distro configuration file.
- Alternatively, you can set this variable in your
- <filename>local.conf</filename> configuration file:
- <literallayout class='monospaced'>
- VIRTUAL-RUNTIME_dev_manager = "udev"
-
- # Some alternative values
- # VIRTUAL-RUNTIME_dev_manager = "busybox-mdev"
- # VIRTUAL-RUNTIME_dev_manager = "systemd"
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id="platdev-appdev-srcrev">
- <title>Using an External SCM</title>
-
- <para>
- If you're working on a recipe that pulls from an external Source
- Code Manager (SCM), it is possible to have the OpenEmbedded build
- system notice new recipe changes added to the SCM and then build
- the resulting packages that depend on the new recipes by using
- the latest versions.
- This only works for SCMs from which it is possible to get a
- sensible revision number for changes.
- Currently, you can do this with Apache Subversion (SVN), Git, and
- Bazaar (BZR) repositories.
- </para>
-
- <para>
- To enable this behavior, the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>
- of the recipe needs to reference
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCPV'><filename>SRCPV</filename></ulink>.
- Here is an example:
- <literallayout class='monospaced'>
- PV = "1.2.3+git${SRCPV}"
- </literallayout>
- Then, you can add the following to your
- <filename>local.conf</filename>:
- <literallayout class='monospaced'>
- SRCREV_pn-<replaceable>PN</replaceable> = "${AUTOREV}"
- </literallayout>
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink>
- is the name of the recipe for which you want to enable automatic source
- revision updating.
- </para>
-
- <para>
- If you do not want to update your local configuration file, you can
- add the following directly to the recipe to finish enabling
- the feature:
- <literallayout class='monospaced'>
- SRCREV = "${AUTOREV}"
- </literallayout>
- </para>
-
- <para>
- The Yocto Project provides a distribution named
- <filename>poky-bleeding</filename>, whose configuration
- file contains the line:
- <literallayout class='monospaced'>
- require conf/distro/include/poky-floating-revisions.inc
- </literallayout>
- This line pulls in the listed include file that contains
- numerous lines of exactly that form:
- <literallayout class='monospaced'>
- #SRCREV_pn-opkg-native ?= "${AUTOREV}"
- #SRCREV_pn-opkg-sdk ?= "${AUTOREV}"
- #SRCREV_pn-opkg ?= "${AUTOREV}"
- #SRCREV_pn-opkg-utils-native ?= "${AUTOREV}"
- #SRCREV_pn-opkg-utils ?= "${AUTOREV}"
- SRCREV_pn-gconf-dbus ?= "${AUTOREV}"
- SRCREV_pn-matchbox-common ?= "${AUTOREV}"
- SRCREV_pn-matchbox-config-gtk ?= "${AUTOREV}"
- SRCREV_pn-matchbox-desktop ?= "${AUTOREV}"
- SRCREV_pn-matchbox-keyboard ?= "${AUTOREV}"
- SRCREV_pn-matchbox-panel-2 ?= "${AUTOREV}"
- SRCREV_pn-matchbox-themes-extra ?= "${AUTOREV}"
- SRCREV_pn-matchbox-terminal ?= "${AUTOREV}"
- SRCREV_pn-matchbox-wm ?= "${AUTOREV}"
- SRCREV_pn-settings-daemon ?= "${AUTOREV}"
- SRCREV_pn-screenshot ?= "${AUTOREV}"
- .
- .
- .
- </literallayout>
- These lines allow you to experiment with building a
- distribution that tracks the latest development source
- for numerous packages.
- <note><title>Caution</title>
- The <filename>poky-bleeding</filename> distribution
- is not tested on a regular basis.
- Keep this in mind if you use it.
- </note>
- </para>
- </section>
-
- <section id='creating-a-read-only-root-filesystem'>
- <title>Creating a Read-Only Root Filesystem</title>
-
- <para>
- Suppose, for security reasons, you need to disable
- your target device's root filesystem's write permissions
- (i.e. you need a read-only root filesystem).
- Or, perhaps you are running the device's operating system
- from a read-only storage device.
- For either case, you can customize your image for
- that behavior.
- </para>
-
- <note>
- Supporting a read-only root filesystem requires that the system and
- applications do not try to write to the root filesystem.
- You must configure all parts of the target system to write
- elsewhere, or to gracefully fail in the event of attempting to
- write to the root filesystem.
- </note>
-
- <section id='creating-the-root-filesystem'>
- <title>Creating the Root Filesystem</title>
-
- <para>
- To create the read-only root filesystem, simply add the
- "read-only-rootfs" feature to your image, normally in one of two ways.
- The first way is to add the "read-only-rootfs" image feature
- in the image's recipe file via the
- <filename>IMAGE_FEATURES</filename> variable:
- <literallayout class='monospaced'>
- IMAGE_FEATURES += "read-only-rootfs"
- </literallayout>
- As an alternative, you can add the same feature from within your
- build directory's <filename>local.conf</filename> file with the
- associated <filename>EXTRA_IMAGE_FEATURES</filename> variable, as in:
- <literallayout class='monospaced'>
- EXTRA_IMAGE_FEATURES = "read-only-rootfs"
- </literallayout>
- </para>
-
- <para>
- For more information on how to use these variables, see the
- "<link linkend='usingpoky-extend-customimage-imagefeatures'>Customizing Images Using Custom <filename>IMAGE_FEATURES</filename> and <filename>EXTRA_IMAGE_FEATURES</filename></link>"
- section.
- For information on the variables, see
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>
- and <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES'><filename>EXTRA_IMAGE_FEATURES</filename></ulink>.
- </para>
- </section>
-
- <section id='post-installation-scripts'>
- <title>Post-Installation Scripts</title>
-
- <para>
- It is very important that you make sure all
-                    post-installation (<filename>pkg_postinst</filename>) scripts
- for packages that are installed into the image can be run
- at the time when the root filesystem is created during the
- build on the host system.
- These scripts cannot attempt to run during first-boot on the
- target device.
- With the "read-only-rootfs" feature enabled,
- the build system checks during root filesystem creation to make
- sure all post-installation scripts succeed.
- If any of these scripts still need to be run after the root
- filesystem is created, the build immediately fails.
-                    These build-time checks ensure that the build fails
-                    rather than having the target device fail later during its
-                    initial boot operation.
- </para>
-
- <para>
- Most of the common post-installation scripts generated by the
- build system for the out-of-the-box Yocto Project are engineered
- so that they can run during root filesystem creation
- (e.g. post-installation scripts for caching fonts).
- However, if you create and add custom scripts, you need
- to be sure they can be run during this file system creation.
- </para>
-
- <para>
- Here are some common problems that prevent
- post-installation scripts from running during root filesystem
- creation:
- <itemizedlist>
- <listitem><para>
- <emphasis>Not using $D in front of absolute
- paths:</emphasis>
- The build system defines
- <filename>$</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-D'><filename>D</filename></ulink>
- when the root filesystem is created.
- Furthermore, <filename>$D</filename> is blank when the
- script is run on the target device.
-                            Consequently, <filename>$D</filename> serves two purposes:
-                            it ensures paths are valid in both the host and target
-                            environments, and it lets a script determine which
-                            environment it is running in so it can take the
-                            appropriate action (see the example following this list).
- </para></listitem>
- <listitem><para>
- <emphasis>Attempting to run processes that are
- specific to or dependent on the target
- architecture:</emphasis>
- You can work around these attempts by using native
- tools, which run on the host system,
- to accomplish the same tasks, or
- by alternatively running the processes under QEMU,
- which has the <filename>qemu_run_binary</filename>
- function.
- For more information, see the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-qemu'><filename>qemu</filename></ulink>
- class.</para></listitem>
- </itemizedlist>
- </para>
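-
-                <para>
-                    As a simple sketch of the <filename>$D</filename>
-                    convention, the following post-installation script
-                    fragment (the package name and paths are hypothetical)
-                    succeeds at root filesystem creation time:
-                    <literallayout class='monospaced'>
-     pkg_postinst_${PN}() {
-         # $D points at the image root filesystem while the image is
-         # being created on the host; it is empty when the script runs
-         # on the target.
-         mkdir -p $D${sysconfdir}/myapp
-         printf "enabled\n" > $D${sysconfdir}/myapp/state
-     }
-                    </literallayout>
-                    Because every path is prefixed with
-                    <filename>$D</filename>, the script can run while the
-                    root filesystem is created, which is what the
-                    "read-only-rootfs" check requires.
-                    A script can also test whether <filename>$D</filename>
-                    is empty to detect that it is running on the target and
-                    take a different action.
-                </para>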
- </section>
-
- <section id='areas-with-write-access'>
- <title>Areas With Write Access</title>
-
- <para>
- With the "read-only-rootfs" feature enabled,
- any attempt by the target to write to the root filesystem at
- runtime fails.
-            Consequently, you must make sure that processes and
-            applications that attempt these types of writes are
-            configured to write to directories that do have write access
-            (e.g. <filename>/tmp</filename> or <filename>/var/run</filename>).
- </para>
- </section>
- </section>
-
-
-
-
- <section id='maintaining-build-output-quality'>
- <title>Maintaining Build Output Quality</title>
-
- <para>
- Many factors can influence the quality of a build.
- For example, if you upgrade a recipe to use a new version of an
- upstream software package or you experiment with some new
- configuration options, subtle changes can occur that you might
- not detect until later.
- Consider the case where your recipe is using a newer version of
- an upstream package.
- In this case, a new version of a piece of software might
- introduce an optional dependency on another library, which is
- auto-detected.
- If that library has already been built when the software is
- building, the software will link to the built library and that
- library will be pulled into your image along with the new
- software even if you did not want the library.
- </para>
-
- <para>
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-buildhistory'><filename>buildhistory</filename></ulink>
- class exists to help you maintain the quality of your build
- output.
- You can use the class to highlight unexpected and possibly
- unwanted changes in the build output.
- When you enable build history, it records information about the
- contents of each package and image and then commits that
- information to a local Git repository where you can examine
- the information.
- </para>
-
- <para>
- The remainder of this section describes the following:
- <itemizedlist>
- <listitem><para>
- How you can enable and disable build history
- </para></listitem>
- <listitem><para>
- How to understand what the build history contains
- </para></listitem>
- <listitem><para>
- How to limit the information used for build history
- </para></listitem>
- <listitem><para>
- How to examine the build history from both a
- command-line and web interface
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id='enabling-and-disabling-build-history'>
- <title>Enabling and Disabling Build History</title>
-
- <para>
- Build history is disabled by default.
- To enable it, add the following <filename>INHERIT</filename>
- statement and set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BUILDHISTORY_COMMIT'><filename>BUILDHISTORY_COMMIT</filename></ulink>
- variable to "1" at the end of your
- <filename>conf/local.conf</filename> file found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- INHERIT += "buildhistory"
- BUILDHISTORY_COMMIT = "1"
- </literallayout>
- Enabling build history as previously described causes the
- OpenEmbedded build system to collect build output information
- and commit it as a single commit to a local
- <ulink url='&YOCTO_DOCS_OM_URL;#git'>Git</ulink>
- repository.
- <note>
- Enabling build history increases your build times slightly,
- particularly for images, and increases the amount of disk
- space used during the build.
- </note>
- </para>
-
- <para>
- You can disable build history by removing the previous
- statements from your <filename>conf/local.conf</filename>
- file.
- </para>
- </section>
-
- <section id='understanding-what-the-build-history-contains'>
- <title>Understanding What the Build History Contains</title>
-
- <para>
- Build history information is kept in
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-TOPDIR'><filename>TOPDIR</filename></ulink><filename>}/buildhistory</filename>
- in the Build Directory as defined by the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BUILDHISTORY_DIR'><filename>BUILDHISTORY_DIR</filename></ulink>
- variable.
- The following is an example abbreviated listing:
- <imagedata fileref="figures/buildhistory.png" align="center" width="6in" depth="4in" />
- </para>
-
- <para>
- At the top level, a <filename>metadata-revs</filename>
- file exists that lists the revisions of the repositories for
- the enabled layers when the build was produced.
- The rest of the data splits into separate
- <filename>packages</filename>, <filename>images</filename>
- and <filename>sdk</filename> directories, the contents of
- which are described as follows.
- </para>
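-
-        <para>
-            As a rough sketch (the actual recipe, package, and machine
-            names depend on your build), the layout resembles the
-            following:
-            <literallayout class='monospaced'>
-     buildhistory/
-         metadata-revs
-         images/
-             qemux86_64/
-                 glibc/
-                     core-image-minimal/
-         packages/
-             i586-poky-linux/
-                 busybox/
-                     latest
-                     busybox/
-                         latest
-         sdk/
-            </literallayout>
-        </para>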
-
- <section id='build-history-package-information'>
- <title>Build History Package Information</title>
-
- <para>
- The history for each package contains a text file that has
- name-value pairs with information about the package.
- For example,
- <filename>buildhistory/packages/i586-poky-linux/busybox/busybox/latest</filename>
- contains the following:
- <literallayout class='monospaced'>
- PV = 1.22.1
- PR = r32
- RPROVIDES =
- RDEPENDS = glibc (>= 2.20) update-alternatives-opkg
- RRECOMMENDS = busybox-syslog busybox-udhcpc update-rc.d
- PKGSIZE = 540168
- FILES = /usr/bin/* /usr/sbin/* /usr/lib/busybox/* /usr/lib/lib*.so.* \
- /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d \
- /usr/lib/udev/rules.d /usr/share/busybox /usr/lib/busybox/* \
- /usr/share/pixmaps /usr/share/applications /usr/share/idl \
- /usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers
- FILELIST = /bin/busybox /bin/busybox.nosuid /bin/busybox.suid /bin/sh \
- /etc/busybox.links.nosuid /etc/busybox.links.suid
- </literallayout>
- Most of these name-value pairs correspond to variables
- used to produce the package.
- The exceptions are <filename>FILELIST</filename>, which
- is the actual list of files in the package, and
- <filename>PKGSIZE</filename>, which is the total size of
- files in the package in bytes.
- </para>
-
- <para>
- A file also exists that corresponds to the recipe from
- which the package came (e.g.
- <filename>buildhistory/packages/i586-poky-linux/busybox/latest</filename>):
- <literallayout class='monospaced'>
- PV = 1.22.1
- PR = r32
- DEPENDS = initscripts kern-tools-native update-rc.d-native \
- virtual/i586-poky-linux-compilerlibs virtual/i586-poky-linux-gcc \
- virtual/libc virtual/update-alternatives
- PACKAGES = busybox-ptest busybox-httpd busybox-udhcpd busybox-udhcpc \
- busybox-syslog busybox-mdev busybox-hwclock busybox-dbg \
- busybox-staticdev busybox-dev busybox-doc busybox-locale busybox
- </literallayout>
- </para>
-
- <para>
- Finally, for those recipes fetched from a version control
- system (e.g., Git), a file exists that lists source
- revisions that are specified in the recipe and lists
- the actual revisions used during the build.
- Listed and actual revisions might differ when
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
- is set to
- ${<ulink url='&YOCTO_DOCS_REF_URL;#var-AUTOREV'><filename>AUTOREV</filename></ulink>}.
-                Here is an example from
-                <filename>buildhistory/packages/qemux86-poky-linux/linux-yocto/latest_srcrev</filename>:
- <literallayout class='monospaced'>
- # SRCREV_machine = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
- SRCREV_machine = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
- # SRCREV_meta = "a227f20eff056e511d504b2e490f3774ab260d6f"
- SRCREV_meta = "a227f20eff056e511d504b2e490f3774ab260d6f"
- </literallayout>
- You can use the
- <filename>buildhistory-collect-srcrevs</filename>
- command with the <filename>-a</filename> option to
- collect the stored <filename>SRCREV</filename> values
- from build history and report them in a format suitable for
- use in global configuration (e.g.,
- <filename>local.conf</filename> or a distro include file)
- to override floating <filename>AUTOREV</filename> values
- to a fixed set of revisions.
- Here is some example output from this command:
- <literallayout class='monospaced'>
- $ buildhistory-collect-srcrevs -a
- # i586-poky-linux
- SRCREV_pn-glibc = "b8079dd0d360648e4e8de48656c5c38972621072"
- SRCREV_pn-glibc-initial = "b8079dd0d360648e4e8de48656c5c38972621072"
- SRCREV_pn-opkg-utils = "53274f087565fd45d8452c5367997ba6a682a37a"
- SRCREV_pn-kmod = "fd56638aed3fe147015bfa10ed4a5f7491303cb4"
- # x86_64-linux
- SRCREV_pn-gtk-doc-stub-native = "1dea266593edb766d6d898c79451ef193eb17cfa"
- SRCREV_pn-dtc-native = "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf"
- SRCREV_pn-update-rc.d-native = "eca680ddf28d024954895f59a241a622dd575c11"
- SRCREV_glibc_pn-cross-localedef-native = "b8079dd0d360648e4e8de48656c5c38972621072"
- SRCREV_localedef_pn-cross-localedef-native = "c833367348d39dad7ba018990bfdaffaec8e9ed3"
- SRCREV_pn-prelink-native = "faa069deec99bf61418d0bab831c83d7c1b797ca"
- SRCREV_pn-opkg-utils-native = "53274f087565fd45d8452c5367997ba6a682a37a"
- SRCREV_pn-kern-tools-native = "23345b8846fe4bd167efdf1bd8a1224b2ba9a5ff"
- SRCREV_pn-kmod-native = "fd56638aed3fe147015bfa10ed4a5f7491303cb4"
- # qemux86-poky-linux
- SRCREV_machine_pn-linux-yocto = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
- SRCREV_meta_pn-linux-yocto = "a227f20eff056e511d504b2e490f3774ab260d6f"
- # all-poky-linux
- SRCREV_pn-update-rc.d = "eca680ddf28d024954895f59a241a622dd575c11"
- </literallayout>
- <note>
- Here are some notes on using the
- <filename>buildhistory-collect-srcrevs</filename>
- command:
- <itemizedlist>
- <listitem><para>
- By default, only values where the
- <filename>SRCREV</filename> was not hardcoded
- (usually when <filename>AUTOREV</filename>
- is used) are reported.
- Use the <filename>-a</filename> option to
- see all <filename>SRCREV</filename> values.
- </para></listitem>
- <listitem><para>
- The output statements might not have any effect
- if overrides are applied elsewhere in the
- build system configuration.
- Use the <filename>-f</filename> option to add
- the <filename>forcevariable</filename> override
- to each output line if you need to work around
- this restriction.
- </para></listitem>
- <listitem><para>
- The script does apply special handling when
- building for multiple machines.
-                            However, the script does place a comment before
-                            each set of values that specifies the
-                            triplet to which they belong, as previously
-                            shown (e.g.,
-                            <filename>i586-poky-linux</filename>).
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
- </section>
-
- <section id='build-history-image-information'>
- <title>Build History Image Information</title>
-
- <para>
- The files produced for each image are as follows:
- <itemizedlist>
- <listitem><para>
- <filename>image-files:</filename>
- A directory containing selected files from the root
- filesystem.
- The files are defined by
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BUILDHISTORY_IMAGE_FILES'><filename>BUILDHISTORY_IMAGE_FILES</filename></ulink>.
- </para></listitem>
- <listitem><para>
- <filename>build-id.txt:</filename>
- Human-readable information about the build
- configuration and metadata source revisions.
- This file contains the full build header as printed
- by BitBake.
- </para></listitem>
- <listitem><para>
- <filename>*.dot:</filename>
- Dependency graphs for the image that are
- compatible with <filename>graphviz</filename>.
- </para></listitem>
- <listitem><para>
- <filename>files-in-image.txt:</filename>
- A list of files in the image with permissions,
- owner, group, size, and symlink information.
- </para></listitem>
- <listitem><para>
- <filename>image-info.txt:</filename>
- A text file containing name-value pairs with
- information about the image.
- See the following listing example for more
- information.
- </para></listitem>
- <listitem><para>
- <filename>installed-package-names.txt:</filename>
- A list of installed packages by name only.
- </para></listitem>
- <listitem><para>
- <filename>installed-package-sizes.txt:</filename>
- A list of installed packages ordered by size.
- </para></listitem>
- <listitem><para>
- <filename>installed-packages.txt:</filename>
- A list of installed packages with full package
- filenames.
- </para></listitem>
- </itemizedlist>
- <note>
-                    Installed package information can be gathered
-                    and produced even if package management is disabled
-                    for the final image.
- </note>
- </para>
-
- <para>
- Here is an example of <filename>image-info.txt</filename>:
- <literallayout class='monospaced'>
- DISTRO = poky
- DISTRO_VERSION = 1.7
- USER_CLASSES = buildstats image-mklibs image-prelink
- IMAGE_CLASSES = image_types
- IMAGE_FEATURES = debug-tweaks
- IMAGE_LINGUAS =
- IMAGE_INSTALL = packagegroup-core-boot run-postinsts
- BAD_RECOMMENDATIONS =
- NO_RECOMMENDATIONS =
- PACKAGE_EXCLUDE =
- ROOTFS_POSTPROCESS_COMMAND = write_package_manifest; license_create_manifest; \
- write_image_manifest ; buildhistory_list_installed_image ; \
- buildhistory_get_image_installed ; ssh_allow_empty_password; \
- postinst_enable_logging; rootfs_update_timestamp ; ssh_disable_dns_lookup ;
- IMAGE_POSTPROCESS_COMMAND = buildhistory_get_imageinfo ;
- IMAGESIZE = 6900
- </literallayout>
- Other than <filename>IMAGESIZE</filename>, which is the
- total size of the files in the image in Kbytes, the
- name-value pairs are variables that may have influenced the
- content of the image.
- This information is often useful when you are trying to
- determine why a change in the package or file
- listings has occurred.
- </para>
- </section>
-
- <section id='using-build-history-to-gather-image-information-only'>
- <title>Using Build History to Gather Image Information Only</title>
-
- <para>
- As you can see, build history produces image information,
- including dependency graphs, so you can see why something
- was pulled into the image.
- If you are just interested in this information and not
- interested in collecting specific package or SDK
- information, you can enable writing only image information
- without any history by adding the following to your
- <filename>conf/local.conf</filename> file found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- INHERIT += "buildhistory"
- BUILDHISTORY_COMMIT = "0"
- BUILDHISTORY_FEATURES = "image"
- </literallayout>
- Here, you set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BUILDHISTORY_FEATURES'><filename>BUILDHISTORY_FEATURES</filename></ulink>
- variable to use the image feature only.
- </para>
- </section>
-
- <section id='build-history-sdk-information'>
- <title>Build History SDK Information</title>
-
- <para>
- Build history collects similar information on the contents
- of SDKs
- (e.g. <filename>bitbake -c populate_sdk imagename</filename>)
- as compared to information it collects for images.
- Furthermore, this information differs depending on whether
- an extensible or standard SDK is being produced.
- </para>
-
- <para>
- The following list shows the files produced for SDKs:
- <itemizedlist>
- <listitem><para>
- <filename>files-in-sdk.txt:</filename>
- A list of files in the SDK with permissions,
- owner, group, size, and symlink information.
- This list includes both the host and target parts
- of the SDK.
- </para></listitem>
- <listitem><para>
- <filename>sdk-info.txt:</filename>
- A text file containing name-value pairs with
- information about the SDK.
- See the following listing example for more
- information.
- </para></listitem>
- <listitem><para>
- <filename>sstate-task-sizes.txt:</filename>
- A text file containing name-value pairs with
- information about task group sizes
- (e.g. <filename>do_populate_sysroot</filename>
- tasks have a total size).
- The <filename>sstate-task-sizes.txt</filename> file
- exists only when an extensible SDK is created.
- </para></listitem>
- <listitem><para>
- <filename>sstate-package-sizes.txt:</filename>
- A text file containing name-value pairs with
- information for the shared-state packages and
- sizes in the SDK.
- The <filename>sstate-package-sizes.txt</filename>
- file exists only when an extensible SDK is created.
- </para></listitem>
- <listitem><para>
- <filename>sdk-files:</filename>
- A folder that contains copies of the files
- mentioned in
- <filename>BUILDHISTORY_SDK_FILES</filename> if the
- files are present in the output.
- Additionally, the default value of
- <filename>BUILDHISTORY_SDK_FILES</filename> is
- specific to the extensible SDK although you can
- set it differently if you would like to pull in
- specific files from the standard SDK.</para>
-
- <para>The default files are
- <filename>conf/local.conf</filename>,
- <filename>conf/bblayers.conf</filename>,
- <filename>conf/auto.conf</filename>,
- <filename>conf/locked-sigs.inc</filename>, and
- <filename>conf/devtool.conf</filename>.
- Thus, for an extensible SDK, these files get
- copied into the <filename>sdk-files</filename>
- directory.
- </para></listitem>
- <listitem><para>
- The following information appears under
- each of the <filename>host</filename>
- and <filename>target</filename> directories
- for the portions of the SDK that run on the host
- and on the target, respectively:
- <note>
- The following files for the most part are empty
- when producing an extensible SDK because this
- type of SDK is not constructed from packages
- as is the standard SDK.
- </note>
- <itemizedlist>
- <listitem><para>
- <filename>depends.dot:</filename>
- Dependency graph for the SDK that is
- compatible with
- <filename>graphviz</filename>.
- </para></listitem>
- <listitem><para>
- <filename>installed-package-names.txt:</filename>
- A list of installed packages by name only.
- </para></listitem>
- <listitem><para>
- <filename>installed-package-sizes.txt:</filename>
- A list of installed packages ordered by size.
- </para></listitem>
- <listitem><para>
- <filename>installed-packages.txt:</filename>
- A list of installed packages with full
- package filenames.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Here is an example of <filename>sdk-info.txt</filename>:
- <literallayout class='monospaced'>
- DISTRO = poky
- DISTRO_VERSION = 1.3+snapshot-20130327
- SDK_NAME = poky-glibc-i686-arm
- SDK_VERSION = 1.3+snapshot
- SDKMACHINE =
- SDKIMAGE_FEATURES = dev-pkgs dbg-pkgs
- BAD_RECOMMENDATIONS =
- SDKSIZE = 352712
- </literallayout>
- Other than <filename>SDKSIZE</filename>, which is the
- total size of the files in the SDK in Kbytes, the
- name-value pairs are variables that might have influenced
- the content of the SDK.
- This information is often useful when you are trying to
- determine why a change in the package or file listings
- has occurred.
- </para>
- </section>
-
- <section id='examining-build-history-information'>
- <title>Examining Build History Information</title>
-
- <para>
- You can examine build history output from the command
- line or from a web interface.
- </para>
-
- <para>
- To see any changes that have occurred (assuming you have
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BUILDHISTORY_COMMIT'><filename>BUILDHISTORY_COMMIT</filename></ulink><filename>&nbsp;= "1"</filename>),
- you can simply use any Git command that allows you to
- view the history of a repository.
- Here is one method:
- <literallayout class='monospaced'>
- $ git log -p
- </literallayout>
- You need to realize, however, that this method does show
- changes that are not significant (e.g. a package's size
- changing by a few bytes).
- </para>
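-
-        <para>
-            If you are only interested in a particular image or package,
-            you can restrict the log to the corresponding subdirectory of
-            the build history repository (the path below is only an
-            example):
-            <literallayout class='monospaced'>
-     $ git log -p -- images/qemux86_64/glibc/core-image-minimal
-            </literallayout>
-        </para>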
-
- <para>
-            There is, however, a command-line tool called
-            <filename>buildhistory-diff</filename>
-            that queries the Git repository and prints just the
-            differences that might be significant in human-readable
-            form.
- Here is an example:
- <literallayout class='monospaced'>
- $ ~/poky/poky/scripts/buildhistory-diff . HEAD^
- Changes to images/qemux86_64/glibc/core-image-minimal (files-in-image.txt):
- /etc/anotherpkg.conf was added
- /sbin/anotherpkg was added
- * (installed-package-names.txt):
- * anotherpkg was added
- Changes to images/qemux86_64/glibc/core-image-minimal (installed-package-names.txt):
- anotherpkg was added
- packages/qemux86_64-poky-linux/v86d: PACKAGES: added "v86d-extras"
- * PR changed from "r0" to "r1"
- * PV changed from "0.1.10" to "0.1.12"
- packages/qemux86_64-poky-linux/v86d/v86d: PKGSIZE changed from 110579 to 144381 (+30%)
- * PR changed from "r0" to "r1"
- * PV changed from "0.1.10" to "0.1.12"
- </literallayout>
- <note>
- The <filename>buildhistory-diff</filename> tool
- requires the <filename>GitPython</filename> package.
- Be sure to install it using Pip3 as follows:
- <literallayout class='monospaced'>
- $ pip3 install GitPython --user
- </literallayout>
- Alternatively, you can install
- <filename>python3-git</filename> using the appropriate
- distribution package manager (e.g.
- <filename>apt-get</filename>, <filename>dnf</filename>,
-                or <filename>zypper</filename>).
- </note>
- </para>
-
- <para>
- To see changes to the build history using a web interface,
-            follow the instructions in the <filename>README</filename>
-            file found at
-            <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/buildhistory-web/'></ulink>.
- </para>
-
- <para>
- Here is a sample screenshot of the interface:
- <imagedata fileref="figures/buildhistory-web.png" align="center" scalefit="1" width="130%" contentdepth="130%" />
- </para>
- </section>
- </section>
- </section>
-
- <section id="performing-automated-runtime-testing">
- <title>Performing Automated Runtime Testing</title>
-
- <para>
- The OpenEmbedded build system makes available a series of automated
- tests for images to verify runtime functionality.
- You can run these tests on either QEMU or actual target hardware.
- Tests are written in Python making use of the
- <filename>unittest</filename> module, and the majority of them
- run commands on the target system over SSH.
- This section describes how you set up the environment to use these
- tests, run available tests, and write and add your own tests.
- </para>
-
- <para>
- For information on the test and QA infrastructure available
- within the Yocto Project, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#testing-and-quality-assurance'>Testing and Quality Assurance</ulink>"
- section in the Yocto Project Reference Manual.
- </para>
-
- <section id='enabling-tests'>
- <title>Enabling Tests</title>
-
- <para>
- Depending on whether you are planning to run tests using
- QEMU or on the hardware, you have to take
- different steps to enable the tests.
- See the following subsections for information on how to
- enable both types of tests.
- </para>
-
- <section id='qemu-image-enabling-tests'>
- <title>Enabling Runtime Tests on QEMU</title>
-
- <para>
- In order to run tests, you need to do the following:
- <itemizedlist>
- <listitem><para><emphasis>Set up to avoid interaction
- with <filename>sudo</filename> for networking:</emphasis>
- To accomplish this, you must do one of the
- following:
- <itemizedlist>
- <listitem><para>Add
- <filename>NOPASSWD</filename> for your user
- in <filename>/etc/sudoers</filename> either for
- all commands or just for
- <filename>runqemu-ifup</filename>.
-                                You must provide the full path as that can
-                                change if you are using multiple clones of the
-                                source repository (see the example following
-                                this list).
- <note>
- On some distributions, you also need to
- comment out "Defaults requiretty" in
- <filename>/etc/sudoers</filename>.
- </note></para></listitem>
- <listitem><para>Manually configure a tap interface
- for your system.</para></listitem>
- <listitem><para>Run as root the script in
- <filename>scripts/runqemu-gen-tapdevs</filename>,
- which should generate a list of tap devices.
- This is the option typically chosen for
- Autobuilder-type environments.
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- Be sure to use an absolute path
- when calling this script
- with sudo.
- </para></listitem>
- <listitem><para>
- The package recipe
- <filename>qemu-helper-native</filename>
- is required to run this script.
- Build the package using the
- following command:
- <literallayout class='monospaced'>
- $ bitbake qemu-helper-native
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </note>
- </para></listitem>
- </itemizedlist></para></listitem>
- <listitem><para><emphasis>Set the
- <filename>DISPLAY</filename> variable:</emphasis>
- You need to set this variable so that you have an X
- server available (e.g. start
- <filename>vncserver</filename> for a headless machine).
- </para></listitem>
- <listitem><para><emphasis>Be sure your host's firewall
- accepts incoming connections from
- 192.168.7.0/24:</emphasis>
- Some of the tests (in particular DNF tests) start
- an HTTP server on a random high number port,
- which is used to serve files to the target.
- The DNF module serves
- <filename>${WORKDIR}/oe-rootfs-repo</filename>
- so it can run DNF channel commands.
- That means your host's firewall
- must accept incoming connections from 192.168.7.0/24,
- which is the default IP range used for tap devices
- by <filename>runqemu</filename>.</para></listitem>
- <listitem><para><emphasis>Be sure your host has the
- correct packages installed:</emphasis>
-                    Depending on your host's distribution, you need
- to have the following packages installed:
- <itemizedlist>
- <listitem><para>Ubuntu and Debian:
- <filename>sysstat</filename> and
- <filename>iproute2</filename>
- </para></listitem>
- <listitem><para>OpenSUSE:
- <filename>sysstat</filename> and
- <filename>iproute2</filename>
- </para></listitem>
- <listitem><para>Fedora:
- <filename>sysstat</filename> and
- <filename>iproute</filename>
- </para></listitem>
- <listitem><para>CentOS:
- <filename>sysstat</filename> and
- <filename>iproute</filename>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </itemizedlist>
- </para>
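-
-            <para>
-                For example, a <filename>/etc/sudoers</filename> entry
-                restricted to <filename>runqemu-ifup</filename> might look
-                like the following sketch, where the user name and the
-                path to the script are placeholders you must adapt to
-                your own setup:
-                <literallayout class='monospaced'>
-     builder ALL=(ALL) NOPASSWD: /home/builder/poky/scripts/runqemu-ifup
-                </literallayout>
-            </para>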
-
- <para>
- Once you start running the tests, the following happens:
- <orderedlist>
- <listitem><para>A copy of the root filesystem is written
- to <filename>${WORKDIR}/testimage</filename>.
- </para></listitem>
- <listitem><para>The image is booted under QEMU using the
- standard <filename>runqemu</filename> script.
- </para></listitem>
- <listitem><para>A default timeout of 500 seconds occurs
- to allow for the boot process to reach the login prompt.
- You can change the timeout period by setting
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_QEMUBOOT_TIMEOUT'><filename>TEST_QEMUBOOT_TIMEOUT</filename></ulink>
- in the <filename>local.conf</filename> file.
- </para></listitem>
-                    <listitem><para>Once the boot process completes and the
-                        login prompt appears, the tests run.
- The full boot log is written to
- <filename>${WORKDIR}/testimage/qemu_boot_log</filename>.
- </para></listitem>
- <listitem><para>Each test module loads in the order found
- in <filename>TEST_SUITES</filename>.
- You can find the full output of the commands run over
- SSH in
-                        <filename>${WORKDIR}/testimage/ssh_target_log</filename>.
- </para></listitem>
- <listitem><para>If no failures occur, the task running the
- tests ends successfully.
- You can find the output from the
- <filename>unittest</filename> in the task log at
- <filename>${WORKDIR}/temp/log.do_testimage</filename>.
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='hardware-image-enabling-tests'>
- <title>Enabling Runtime Tests on Hardware</title>
-
- <para>
- The OpenEmbedded build system can run tests on real
- hardware, and for certain devices it can also deploy
- the image to be tested onto the device beforehand.
- </para>
-
- <para>
- For automated deployment, a "master image" is installed
- onto the hardware once as part of setup.
- Then, each time tests are to be run, the following
- occurs:
- <orderedlist>
-                    <listitem><para>The master image is booted and then
-                        used to write the image to be tested to
-                        a second partition.
- </para></listitem>
- <listitem><para>The device is then rebooted using an
- external script that you need to provide.
- </para></listitem>
- <listitem><para>The device boots into the image to be
- tested.
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- When running tests (independent of whether the image
- has been deployed automatically or not), the device is
- expected to be connected to a network on a
- pre-determined IP address.
- You can either use static IP addresses written into
- the image, or set the image to use DHCP and have your
- DHCP server on the test network assign a known IP address
- based on the MAC address of the device.
- </para>
-
- <para>
- In order to run tests on hardware, you need to set
- <filename>TEST_TARGET</filename> to an appropriate value.
- For QEMU, you do not have to change anything, the default
- value is "qemu".
-                For running tests on hardware, the following options exist (a configuration example follows this list):
- <itemizedlist>
- <listitem><para><emphasis>"simpleremote":</emphasis>
- Choose "simpleremote" if you are going to
- run tests on a target system that is already
- running the image to be tested and is available
- on the network.
- You can use "simpleremote" in conjunction
- with either real hardware or an image running
- within a separately started QEMU or any
- other virtual machine manager.
- </para></listitem>
- <listitem><para><emphasis>"SystemdbootTarget":</emphasis>
- Choose "SystemdbootTarget" if your hardware is
- an EFI-based machine with
- <filename>systemd-boot</filename> as bootloader and
- <filename>core-image-testmaster</filename>
- (or something similar) is installed.
- Also, your hardware under test must be in a
- DHCP-enabled network that gives it the same IP
- address for each reboot.</para>
- <para>If you choose "SystemdbootTarget", there are
- additional requirements and considerations.
- See the
- "<link linkend='selecting-systemdboottarget'>Selecting SystemdbootTarget</link>"
- section, which follows, for more information.
- </para></listitem>
- <listitem><para><emphasis>"BeagleBoneTarget":</emphasis>
- Choose "BeagleBoneTarget" if you are deploying
- images and running tests on the BeagleBone
- "Black" or original "White" hardware.
- For information on how to use these tests, see the
- comments at the top of the BeagleBoneTarget
- <filename>meta-yocto-bsp/lib/oeqa/controllers/beaglebonetarget.py</filename>
- file.
- </para></listitem>
- <listitem><para><emphasis>"EdgeRouterTarget":</emphasis>
-                        Choose "EdgeRouterTarget" if you are deploying
- images and running tests on the Ubiquiti Networks
- EdgeRouter Lite.
- For information on how to use these tests, see the
- comments at the top of the EdgeRouterTarget
- <filename>meta-yocto-bsp/lib/oeqa/controllers/edgeroutertarget.py</filename>
- file.
- </para></listitem>
- <listitem><para><emphasis>"GrubTarget":</emphasis>
-                        Choose "GrubTarget" if you are deploying images and
-                        running tests on any generic PC that boots using GRUB.
- For information on how to use these tests, see the
- comments at the top of the GrubTarget
- <filename>meta-yocto-bsp/lib/oeqa/controllers/grubtarget.py</filename>
- file.
- </para></listitem>
- <listitem><para><emphasis>"<replaceable>your-target</replaceable>":</emphasis>
- Create your own custom target if you want to run
- tests when you are deploying images and running
- tests on a custom machine within your BSP layer.
- To do this, you need to add a Python unit that
- defines the target class under
- <filename>lib/oeqa/controllers/</filename> within
- your layer.
- You must also provide an empty
- <filename>__init__.py</filename>.
- For examples, see files in
- <filename>meta-yocto-bsp/lib/oeqa/controllers/</filename>.
- </para></listitem>
- </itemizedlist>
- </para>
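-
-            <para>
-                As a minimal sketch, running the tests against a device
-                that is already running the image only requires a few
-                lines in your <filename>local.conf</filename> file (the
-                IP addresses shown are examples only):
-                <literallayout class='monospaced'>
-     INHERIT += "testimage"
-     TEST_TARGET = "simpleremote"
-     TEST_TARGET_IP = "192.168.7.2"
-     TEST_SERVER_IP = "192.168.7.1"
-                </literallayout>
-            </para>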
- </section>
-
- <section id='selecting-systemdboottarget'>
- <title>Selecting SystemdbootTarget</title>
-
- <para>
- If you did not set <filename>TEST_TARGET</filename> to
- "SystemdbootTarget", then you do not need any information
- in this section.
- You can skip down to the
- "<link linkend='qemu-image-running-tests'>Running Tests</link>"
- section.
- </para>
-
- <para>
- If you did set <filename>TEST_TARGET</filename> to
- "SystemdbootTarget", you also need to perform a one-time
- setup of your master image by doing the following:
- <orderedlist>
- <listitem><para><emphasis>Set <filename>EFI_PROVIDER</filename>:</emphasis>
- Be sure that <filename>EFI_PROVIDER</filename>
- is as follows:
- <literallayout class='monospaced'>
- EFI_PROVIDER = "systemd-boot"
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>Build the master image:</emphasis>
- Build the <filename>core-image-testmaster</filename>
- image.
- The <filename>core-image-testmaster</filename>
- recipe is provided as an example for a
- "master" image and you can customize the image
- recipe as you would any other recipe.
- </para>
- <para>Here are the image recipe requirements:
- <itemizedlist>
- <listitem><para>Inherits
- <filename>core-image</filename>
- so that kernel modules are installed.
- </para></listitem>
-                            <listitem><para>Installs normal Linux utilities,
-                                not BusyBox ones (e.g.
- <filename>bash</filename>,
- <filename>coreutils</filename>,
- <filename>tar</filename>,
- <filename>gzip</filename>, and
- <filename>kmod</filename>).
- </para></listitem>
- <listitem><para>Uses a custom
- Initial RAM Disk (initramfs) image with a
- custom installer.
- A normal image that you can install usually
- creates a single rootfs partition.
- This image uses another installer that
- creates a specific partition layout.
- Not all Board Support Packages (BSPs)
- can use an installer.
- For such cases, you need to manually create
- the following partition layout on the
- target:
- <itemizedlist>
- <listitem><para>First partition mounted
- under <filename>/boot</filename>,
- labeled "boot".
- </para></listitem>
- <listitem><para>The main rootfs
- partition where this image gets
- installed, which is mounted under
- <filename>/</filename>.
- </para></listitem>
- <listitem><para>Another partition
- labeled "testrootfs" where test
- images get deployed.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para><emphasis>Install image:</emphasis>
- Install the image that you just built on the target
- system.
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- The final thing you need to do when setting
- <filename>TEST_TARGET</filename> to "SystemdbootTarget" is
- to set up the test image:
- <orderedlist>
- <listitem><para><emphasis>Set up your <filename>local.conf</filename> file:</emphasis>
- Make sure you have the following statements in
- your <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- IMAGE_FSTYPES += "tar.gz"
- INHERIT += "testimage"
- TEST_TARGET = "SystemdbootTarget"
- TEST_TARGET_IP = "192.168.2.3"
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>Build your test image:</emphasis>
- Use BitBake to build the image:
- <literallayout class='monospaced'>
- $ bitbake core-image-sato
- </literallayout>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='power-control'>
- <title>Power Control</title>
-
- <para>
- For most hardware targets other than "simpleremote",
- you can control power:
- <itemizedlist>
- <listitem><para>
- You can use
- <filename>TEST_POWERCONTROL_CMD</filename>
- together with
- <filename>TEST_POWERCONTROL_EXTRA_ARGS</filename>
- as a command that runs on the host and does power
- cycling.
- The test code passes one argument to that command:
- off, on or cycle (off then on).
- Here is an example that could appear in your
- <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- TEST_POWERCONTROL_CMD = "powercontrol.exp test 10.11.12.1 nuc1"
- </literallayout>
- In this example, the expect script does the
- following:
- <literallayout class='monospaced'>
- ssh test@10.11.12.1 "pyctl nuc1 <replaceable>arg</replaceable>"
- </literallayout>
- It then runs a Python script that controls power
- for a label called <filename>nuc1</filename>.
- <note>
- You need to customize
- <filename>TEST_POWERCONTROL_CMD</filename>
- and
- <filename>TEST_POWERCONTROL_EXTRA_ARGS</filename>
- for your own setup.
- The one requirement is that it accepts
- "on", "off", and "cycle" as the last argument.
- </note>
- </para></listitem>
- <listitem><para>
- When no command is defined, it connects to the
- device over SSH and uses the classic reboot command
- to reboot the device.
- Classic reboot is fine as long as the machine
- actually reboots (i.e. the SSH test has not
- failed).
- It is useful for scenarios where you have a simple
- setup, typically with a single board, and where
- some manual interaction is okay from time to time.
- </para></listitem>
- </itemizedlist>
- If you have no hardware to automatically perform power
- control but still wish to experiment with automated
- hardware testing, you can use the dialog-power-control
- script that shows a dialog prompting you to perform the
- required power action.
- This script requires either KDialog or Zenity to be
- installed.
- To use this script, set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_POWERCONTROL_CMD'><filename>TEST_POWERCONTROL_CMD</filename></ulink>
- variable as follows:
- <literallayout class='monospaced'>
- TEST_POWERCONTROL_CMD = "${COREBASE}/scripts/contrib/dialog-power-control"
- </literallayout>
- </para>
- </section>
-
- <section id='serial-console-connection'>
- <title>Serial Console Connection</title>
-
- <para>
- For test target classes requiring a serial console
- to interact with the bootloader (e.g. BeagleBoneTarget,
- EdgeRouterTarget, and GrubTarget), you need to
- specify a command to use to connect to the serial console
- of the target machine by using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_SERIALCONTROL_CMD'><filename>TEST_SERIALCONTROL_CMD</filename></ulink>
- variable and optionally the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_SERIALCONTROL_EXTRA_ARGS'><filename>TEST_SERIALCONTROL_EXTRA_ARGS</filename></ulink>
- variable.
- </para>
-
- <para>
-                The command could be a serial terminal program if the
-                machine is connected to a local serial port, or a
-                <filename>telnet</filename> or
-                <filename>ssh</filename> command connecting to a remote
-                console server.
- Regardless of the case, the command simply needs to
- connect to the serial console and forward that connection
- to standard input and output as any normal terminal
- program does.
- For example, to use the picocom terminal program on
- serial device <filename>/dev/ttyUSB0</filename>
- at 115200bps, you would set the variable as follows:
- <literallayout class='monospaced'>
- TEST_SERIALCONTROL_CMD = "picocom /dev/ttyUSB0 -b 115200"
- </literallayout>
- For local devices where the serial port device disappears
- when the device reboots, an additional "serdevtry" wrapper
- script is provided.
- To use this wrapper, simply prefix the terminal command
- with
- <filename>${COREBASE}/scripts/contrib/serdevtry</filename>:
- <literallayout class='monospaced'>
-     TEST_SERIALCONTROL_CMD = "${COREBASE}/scripts/contrib/serdevtry picocom -b 115200 /dev/ttyUSB0"
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id="qemu-image-running-tests">
- <title>Running Tests</title>
-
- <para>
- You can start the tests automatically or manually:
- <itemizedlist>
- <listitem><para><emphasis>Automatically running tests:</emphasis>
- To run the tests automatically after the
- OpenEmbedded build system successfully creates an image,
- first set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TESTIMAGE_AUTO'><filename>TESTIMAGE_AUTO</filename></ulink>
- variable to "1" in your <filename>local.conf</filename>
- file in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- TESTIMAGE_AUTO = "1"
- </literallayout>
- Next, build your image.
- If the image successfully builds, the tests run:
- <literallayout class='monospaced'>
- bitbake core-image-sato
- </literallayout></para></listitem>
- <listitem><para><emphasis>Manually running tests:</emphasis>
- To manually run the tests, first globally inherit the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-testimage*'><filename>testimage</filename></ulink>
- class by editing your <filename>local.conf</filename>
- file:
- <literallayout class='monospaced'>
- INHERIT += "testimage"
- </literallayout>
- Next, use BitBake to run the tests:
- <literallayout class='monospaced'>
- bitbake -c testimage <replaceable>image</replaceable>
- </literallayout></para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- All test files reside in
- <filename>meta/lib/oeqa/runtime</filename> in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
- A test name maps directly to a Python module.
- Each test module may contain a number of individual tests.
- Tests are usually grouped together by the area
-            tested (e.g. tests for systemd reside in
- <filename>meta/lib/oeqa/runtime/systemd.py</filename>).
- </para>
-
- <para>
- You can add tests to any layer provided you place them in the
- proper area and you extend
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBPATH'><filename>BBPATH</filename></ulink>
- in the <filename>local.conf</filename> file as normal.
- Be sure that tests reside in
- <filename><replaceable>layer</replaceable>/lib/oeqa/runtime</filename>.
- <note>
- Be sure that module names do not collide with module names
- used in the default set of test modules in
- <filename>meta/lib/oeqa/runtime</filename>.
- </note>
- </para>
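-
-        <para>
-            For example, a layer that adds a test module named
-            <filename>mytest</filename> might use a layout such as the
-            following (the names are illustrative):
-            <literallayout class='monospaced'>
-     meta-mylayer/
-         lib/
-             oeqa/
-                 runtime/
-                     __init__.py
-                     mytest.py
-            </literallayout>
-        </para>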
-
- <para>
-            You can change the set of tests run by appending to or overriding the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_SUITES'><filename>TEST_SUITES</filename></ulink>
-            variable in <filename>local.conf</filename>.
- Each name in <filename>TEST_SUITES</filename> represents a
- required test for the image.
- Test modules named within <filename>TEST_SUITES</filename>
- cannot be skipped even if a test is not suitable for an image
- (e.g. running the RPM tests on an image without
- <filename>rpm</filename>).
- Appending "auto" to <filename>TEST_SUITES</filename> causes the
- build system to try to run all tests that are suitable for the
- image (i.e. each test module may elect to skip itself).
- </para>
-
- <para>
- The order you list tests in <filename>TEST_SUITES</filename>
- is important and influences test dependencies.
- Consequently, tests that depend on other tests should be added
- after the test on which they depend.
- For example, since the <filename>ssh</filename> test
- depends on the
- <filename>ping</filename> test, "ssh" needs to come after
- "ping" in the list.
- The test class provides no re-ordering or dependency handling.
- <note>
- Each module can have multiple classes with multiple test
- methods.
- And, Python <filename>unittest</filename> rules apply.
- </note>
- </para>
-
- <para>
- Here are some things to keep in mind when running tests:
- <itemizedlist>
- <listitem><para>The default tests for the image are defined
- as:
- <literallayout class='monospaced'>
- DEFAULT_TEST_SUITES_pn-<replaceable>image</replaceable> = "ping ssh df connman syslog xorg scp vnc date rpm dnf dmesg"
- </literallayout></para></listitem>
-                <listitem><para>Add your own test to the list of default tests
-                    by using the following:
- <literallayout class='monospaced'>
- TEST_SUITES_append = " mytest"
- </literallayout></para></listitem>
- <listitem><para>Run a specific list of tests as follows:
- <literallayout class='monospaced'>
- TEST_SUITES = "test1 test2 test3"
- </literallayout>
- Remember, order is important.
- Be sure to place a test that is dependent on another test
- later in the order.</para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id="exporting-tests">
- <title>Exporting Tests</title>
-
- <para>
- You can export tests so that they can run independently of
- the build system.
- Exporting tests is required if you want to be able to hand
- the test execution off to a scheduler.
- You can only export tests that are defined in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_SUITES'><filename>TEST_SUITES</filename></ulink>.
- </para>
-
- <para>
- If your image is already built, make sure the following are set
- in your <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
-     INHERIT += "testexport"
- TEST_TARGET_IP = "<replaceable>IP-address-for-the-test-target</replaceable>"
- TEST_SERVER_IP = "<replaceable>IP-address-for-the-test-server</replaceable>"
- </literallayout>
- You can then export the tests with the following BitBake
- command form:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>image</replaceable> -c testexport
- </literallayout>
- Exporting the tests places them in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- in
- <filename>tmp/testexport/</filename><replaceable>image</replaceable>,
- which is controlled by the
- <filename>TEST_EXPORT_DIR</filename> variable.
- </para>
-
- <para>
- You can now run the tests outside of the build environment:
- <literallayout class='monospaced'>
- $ cd tmp/testexport/<replaceable>image</replaceable>
- $ ./runexported.py testdata.json
- </literallayout>
- </para>
-
- <para>
- Here is a complete example that shows IP addresses and uses
- the <filename>core-image-sato</filename> image:
- <literallayout class='monospaced'>
-     INHERIT += "testexport"
- TEST_TARGET_IP = "192.168.7.2"
- TEST_SERVER_IP = "192.168.7.1"
- </literallayout>
- Use BitBake to export the tests:
- <literallayout class='monospaced'>
- $ bitbake core-image-sato -c testexport
- </literallayout>
- Run the tests outside of the build environment using the
- following:
- <literallayout class='monospaced'>
- $ cd tmp/testexport/core-image-sato
- $ ./runexported.py testdata.json
- </literallayout>
- </para>
- </section>
-
- <section id="qemu-image-writing-new-tests">
- <title>Writing New Tests</title>
-
- <para>
- As mentioned previously, all new test files need to be in the
- proper place for the build system to find them.
- New tests for additional functionality outside of the core
- should be added to the layer that adds the functionality, in
- <filename><replaceable>layer</replaceable>/lib/oeqa/runtime</filename>
- (as long as
- <ulink url='&YOCTO_DOCS_REF_URL;#var-BBPATH'><filename>BBPATH</filename></ulink>
- is extended in the layer's
- <filename>layer.conf</filename> file as normal).
- Just remember the following:
- <itemizedlist>
- <listitem><para>Filenames need to map directly to test
- (module) names.
- </para></listitem>
- <listitem><para>Do not use module names that
- collide with existing core tests.
- </para></listitem>
- <listitem><para>Minimally, an empty
- <filename>__init__.py</filename> file must exist
- in the runtime directory.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- To create a new test, start by copying an existing module
- (e.g. <filename>syslog.py</filename> or
- <filename>gcc.py</filename> are good ones to use).
-            Test modules can use code from
-            <filename>meta/lib/oeqa/utils</filename>, which contains helper
-            classes.
- </para>
-
- <note>
-            Structure shell commands so that you can rely on their exit
-            status, with a single return code indicating success.
-            Be aware that sometimes you will need to parse the command's output.
-            See the <filename>df.py</filename> and
-            <filename>date.py</filename> modules for examples.
- </note>
-
- <para>
- You will notice that all test classes inherit
- <filename>oeRuntimeTest</filename>, which is found in
- <filename>meta/lib/oetest.py</filename>.
- This base class offers some helper attributes, which are
- described in the following sections:
- </para>
-
- <section id='qemu-image-writing-tests-class-methods'>
- <title>Class Methods</title>
-
- <para>
- Class methods are as follows:
- <itemizedlist>
- <listitem><para><emphasis><filename>hasPackage(pkg)</filename>:</emphasis>
- Returns "True" if <filename>pkg</filename> is in the
- installed package list of the image, which is based
- on the manifest file that is generated during the
- <filename>do_rootfs</filename> task.
- </para></listitem>
- <listitem><para><emphasis><filename>hasFeature(feature)</filename>:</emphasis>
- Returns "True" if the feature is in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></ulink>.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='qemu-image-writing-tests-class-attributes'>
- <title>Class Attributes</title>
-
- <para>
- Class attributes are as follows:
- <itemizedlist>
- <listitem><para><emphasis><filename>pscmd</filename>:</emphasis>
- Equals "ps -ef" if <filename>procps</filename> is
- installed in the image.
- Otherwise, <filename>pscmd</filename> equals
- "ps" (busybox).
- </para></listitem>
- <listitem><para><emphasis><filename>tc</filename>:</emphasis>
- The called test context, which gives access to the
- following attributes:
- <itemizedlist>
- <listitem><para><emphasis><filename>d</filename>:</emphasis>
- The BitBake datastore, which allows you to
-                                use calls such as
- <filename>oeRuntimeTest.tc.d.getVar("VIRTUAL-RUNTIME_init_manager")</filename>.
- </para></listitem>
- <listitem><para><emphasis><filename>testslist</filename> and <filename>testsrequired</filename>:</emphasis>
- Used internally.
- The tests do not need these.
- </para></listitem>
- <listitem><para><emphasis><filename>filesdir</filename>:</emphasis>
- The absolute path to
- <filename>meta/lib/oeqa/runtime/files</filename>,
-                            which contains helper files meant to be copied
-                            to the target for use by tests, such as small
-                            files written in C for compilation.
- </para></listitem>
- <listitem><para><emphasis><filename>target</filename>:</emphasis>
- The target controller object used to deploy
- and start an image on a particular target
- (e.g. Qemu, SimpleRemote, and
- SystemdbootTarget).
- Tests usually use the following:
- <itemizedlist>
- <listitem><para><emphasis><filename>ip</filename>:</emphasis>
- The target's IP address.
- </para></listitem>
- <listitem><para><emphasis><filename>server_ip</filename>:</emphasis>
- The host's IP address, which is
- usually used by the DNF test
- suite.
- </para></listitem>
- <listitem><para><emphasis><filename>run(cmd, timeout=None)</filename>:</emphasis>
- The single, most used method.
- This command is a wrapper for:
- <filename>ssh root@host "cmd"</filename>.
- The command returns a tuple:
- (status, output), which are what
- their names imply - the return code
- of "cmd" and whatever output
- it produces.
- The optional timeout argument
- represents the number of seconds the
- test should wait for "cmd" to
- return.
- If the argument is "None", the
- test uses the default instance's
- timeout period, which is 300
- seconds.
- If the argument is "0", the test
- runs until the command returns.
- </para></listitem>
- <listitem><para><emphasis><filename>copy_to(localpath, remotepath)</filename>:</emphasis>
- <filename>scp localpath root@ip:remotepath</filename>.
- </para></listitem>
- <listitem><para><emphasis><filename>copy_from(remotepath, localpath)</filename>:</emphasis>
- <filename>scp root@host:remotepath localpath</filename>.
- </para></listitem>
- </itemizedlist></para></listitem>
- </itemizedlist></para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='qemu-image-writing-tests-instance-attributes'>
- <title>Instance Attributes</title>
-
- <para>
- A single instance attribute exists, which is
- <filename>target</filename>.
- The <filename>target</filename> instance attribute is
- identical to the class attribute of the same name, which
- is described in the previous section.
- This attribute exists as both an instance and class
- attribute so tests can use
- <filename>self.target.run(cmd)</filename> in instance
- methods instead of
- <filename>oeRuntimeTest.tc.target.run(cmd)</filename>.
- </para>
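-
-            <para>
-                Putting these pieces together, here is a minimal sketch of
-                a test module (the module, class, and command names are
-                illustrative, and the import path can vary between
-                releases):
-                <literallayout class='monospaced'>
-     from oeqa.oetest import oeRuntimeTest
-
-     class HostnameTest(oeRuntimeTest):
-
-         def test_hostname(self):
-             # run() executes the command on the target over SSH and
-             # returns a (status, output) tuple.
-             (status, output) = self.target.run('hostname')
-             self.assertEqual(status, 0,
-                 msg="hostname failed: %s" % output)
-                </literallayout>
-            </para>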
- </section>
- </section>
-
- <section id='installing-packages-in-the-dut-without-the-package-manager'>
- <title>Installing Packages in the DUT Without the Package Manager</title>
-
- <para>
- When a test requires a package built by BitBake, it is possible
- to install that package.
- Installing the package does not require a package manager be
- installed in the device under test (DUT).
- It does, however, require an SSH connection and the target must
- be using the <filename>sshcontrol</filename> class.
- <note>
- This method uses <filename>scp</filename> to copy files
- from the host to the target, which causes permissions and
- special attributes to be lost.
- </note>
- </para>
-
- <para>
- A JSON file is used to define the packages needed by a test.
- This file must be in the same path as the file used to define
- the tests.
- Furthermore, the filename must map directly to the test
- module name with a <filename>.json</filename> extension.
- </para>
-
- <para>
-            The JSON file must include an object whose keys are the test
-            names; the value for each key is an object or an array of objects.
- This object (or array of objects) uses the following data:
- <itemizedlist>
- <listitem><para>"pkg" - A mandatory string that is the
- name of the package to be installed.
- </para></listitem>
- <listitem><para>"rm" - An optional boolean, which defaults
- to "false", that specifies to remove the package after
- the test.
- </para></listitem>
- <listitem><para>"extract" - An optional boolean, which
- defaults to "false", that specifies if the package must
- be extracted from the package format.
- When set to "true", the package is not automatically
- installed into the DUT.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- Following is an example JSON file that handles test "foo"
- installing package "bar" and test "foobar" installing
- packages "foo" and "bar".
-            Because "rm" is set to "true" for the "foobar" test, its packages
-            are removed from the DUT once that test completes.
- <literallayout class='monospaced'>
- {
- "foo": {
- "pkg": "bar"
- },
- "foobar": [
- {
- "pkg": "foo",
- "rm": true
- },
- {
- "pkg": "bar",
- "rm": true
- }
- ]
- }
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id='usingpoky-debugging-tools-and-techniques'>
- <title>Debugging Tools and Techniques</title>
-
- <para>
- The exact method for debugging build failures depends on the nature
- of the problem and on the system's area from which the bug
- originates.
- Standard debugging practices such as comparison against the last
- known working version with examination of the changes and the
- re-application of steps to identify the one causing the problem are
- valid for the Yocto Project just as they are for any other system.
- Even though it is impossible to detail every possible potential
- failure, this section provides some general tips to aid in
- debugging given a variety of situations.
- <note><title>Tip</title>
- A useful feature for debugging is the error reporting tool.
- Configuring the Yocto Project to use this tool causes the
- OpenEmbedded build system to produce error reporting commands as
- part of the console output.
- You can enter the commands after the build completes to log
-            error information into a common database that can help you
- figure out what might be going wrong.
- For information on how to enable and use this feature, see the
- "<link linkend='using-the-error-reporting-tool'>Using the Error Reporting Tool</link>"
- section.
- </note>
- </para>
-
- <para>
- The following list shows the debugging topics in the remainder of
- this section:
- <itemizedlist>
- <listitem><para>
- "<link linkend='dev-debugging-viewing-logs-from-failed-tasks'>Viewing Logs from Failed Tasks</link>"
- describes how to find and view logs from tasks that
- failed during the build process.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-debugging-viewing-variable-values'>Viewing Variable Values</link>"
- describes how to use the BitBake <filename>-e</filename>
- option to examine variable values after a recipe has been
- parsed.
- </para></listitem>
- <listitem><para>
- "<link linkend='viewing-package-information-with-oe-pkgdata-util'>Viewing Package Information with <filename>oe-pkgdata-util</filename></link>"
- describes how to use the
- <filename>oe-pkgdata-util</filename> utility to query
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PKGDATA_DIR'><filename>PKGDATA_DIR</filename></ulink>
- and display package-related information for built
- packages.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-viewing-dependencies-between-recipes-and-tasks'>Viewing Dependencies Between Recipes and Tasks</link>"
- describes how to use the BitBake <filename>-g</filename>
- option to display recipe dependency information used
- during the build.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-viewing-task-variable-dependencies'>Viewing Task Variable Dependencies</link>"
- describes how to use the
- <filename>bitbake-dumpsig</filename> command in
- conjunction with key subdirectories in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
- to determine variable dependencies.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-debugging-taskrunning'>Running Specific Tasks</link>"
- describes how to use several BitBake options (e.g.
- <filename>-c</filename>, <filename>-C</filename>, and
- <filename>-f</filename>) to run specific tasks in the
- build chain.
- It can be useful to run tasks "out-of-order" when trying
-                    to isolate build issues.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-debugging-bitbake'>General BitBake Problems</link>"
- describes how to use BitBake's <filename>-D</filename>
- debug output option to reveal more about what BitBake is
- doing during the build.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-debugging-buildfile'>Building with No Dependencies</link>"
- describes how to use the BitBake <filename>-b</filename>
- option to build a recipe while ignoring dependencies.
- </para></listitem>
- <listitem><para>
- "<link linkend='recipe-logging-mechanisms'>Recipe Logging Mechanisms</link>"
- describes how to use the many recipe logging functions
- to produce debugging output and report errors and warnings.
- </para></listitem>
- <listitem><para>
- "<link linkend='debugging-parallel-make-races'>Debugging Parallel Make Races</link>"
- describes how to debug situations where the build consists
- of several parts that are run simultaneously and when the
- output or result of one part is not ready for use with a
- different part of the build that depends on that output.
- </para></listitem>
- <listitem><para>
- "<link linkend='platdev-gdb-remotedebug'>Debugging With the GNU Project Debugger (GDB) Remotely</link>"
- describes how to use GDB to allow you to examine running
- programs, which can help you fix problems.
- </para></listitem>
- <listitem><para>
- "<link linkend='debugging-with-the-gnu-project-debugger-gdb-on-the-target'>Debugging with the GNU Project Debugger (GDB) on the Target</link>"
- describes how to use GDB directly on target hardware for
- debugging.
- </para></listitem>
- <listitem><para>
- "<link linkend='dev-other-debugging-others'>Other Debugging Tips</link>"
- describes miscellaneous debugging tips that can be useful.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id='dev-debugging-viewing-logs-from-failed-tasks'>
- <title>Viewing Logs from Failed Tasks</title>
-
- <para>
- You can find the log for a task in the file
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}/temp/log.do_</filename><replaceable>taskname</replaceable>.
- For example, the log for the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-compile'><filename>do_compile</filename></ulink>
- task of the QEMU minimal image for the x86 machine
- (<filename>qemux86</filename>) might be in
- <filename>tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_compile</filename>.
- To see the commands
- <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>
- ran to generate a log, look at the corresponding
- <filename>run.do_</filename><replaceable>taskname</replaceable>
- file in the same directory.
- </para>
-
- <para>
- <filename>log.do_</filename><replaceable>taskname</replaceable>
- and
- <filename>run.do_</filename><replaceable>taskname</replaceable>
- are actually symbolic links to
- <filename>log.do_</filename><replaceable>taskname</replaceable><filename>.</filename><replaceable>pid</replaceable>
- and
-            <filename>run.do_</filename><replaceable>taskname</replaceable><filename>.</filename><replaceable>pid</replaceable>,
- where <replaceable>pid</replaceable> is the PID the task had
- when it ran.
- The symlinks always point to the files corresponding to the most
- recent run.
- </para>
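-
-        <para>
-            Listing the <filename>temp</filename> directory of a recipe's
-            work directory shows both the per-run log files and the symbolic
-            links that point to the most recent run.
-            The following abbreviated listing is only an illustration and
-            uses a hypothetical PID:
-            <literallayout class='monospaced'>
-     $ ls tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/
-     log.do_compile    log.do_compile.4738    run.do_compile    run.do_compile.4738    ...
-            </literallayout>
-        </para>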
- </section>
-
- <section id='dev-debugging-viewing-variable-values'>
- <title>Viewing Variable Values</title>
-
- <para>
- Sometimes you need to know the value of a variable as a
- result of BitBake's parsing step.
- This could be because some unexpected behavior occurred
- in your project.
- Perhaps an attempt to
- <ulink url='&YOCTO_DOCS_BB_URL;#modifying-existing-variables'>modify a variable</ulink>
- did not work out as expected.
- </para>
-
- <para>
- BitBake's <filename>-e</filename> option is used to display
- variable values after parsing.
- The following command displays the variable values after the
- configuration files (i.e. <filename>local.conf</filename>,
- <filename>bblayers.conf</filename>,
- <filename>bitbake.conf</filename> and so forth) have been
- parsed:
- <literallayout class='monospaced'>
- $ bitbake -e
- </literallayout>
- The following command displays variable values after a specific
- recipe has been parsed.
- The variables include those from the configuration as well:
- <literallayout class='monospaced'>
- $ bitbake -e recipename
- </literallayout>
- <note><para>
- Each recipe has its own private set of variables
- (datastore).
- Internally, after parsing the configuration, a copy of the
- resulting datastore is made prior to parsing each recipe.
- This copying implies that variables set in one recipe will
- not be visible to other recipes.</para>
-
- <para>Likewise, each task within a recipe gets a private
- datastore based on the recipe datastore, which means that
- variables set within one task will not be visible to
- other tasks.</para>
- </note>
- </para>
-
- <para>
- In the output of <filename>bitbake -e</filename>, each
- variable is preceded by a description of how the variable
- got its value, including temporary values that were later
-            overridden.
- This description also includes variable flags (varflags) set on
- the variable.
- The output can be very helpful during debugging.
- </para>
-
- <para>
- Variables that are exported to the environment are preceded by
- <filename>export</filename> in the output of
- <filename>bitbake -e</filename>.
- See the following example:
- <literallayout class='monospaced'>
- export CC="i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/ulf/poky/build/tmp/sysroots/qemux86"
- </literallayout>
- </para>
-
- <para>
- In addition to variable values, the output of the
- <filename>bitbake -e</filename> and
- <filename>bitbake -e</filename>&nbsp;<replaceable>recipe</replaceable>
- commands includes the following information:
- <itemizedlist>
- <listitem><para>
- The output starts with a tree listing all configuration
- files and classes included globally, recursively listing
- the files they include or inherit in turn.
- Much of the behavior of the OpenEmbedded build system
- (including the behavior of the
- <ulink url='&YOCTO_DOCS_REF_URL;#normal-recipe-build-tasks'>normal recipe build tasks</ulink>)
- is implemented in the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-base'><filename>base</filename></ulink>
- class and the classes it inherits, rather than being
- built into BitBake itself.
- </para></listitem>
- <listitem><para>
- After the variable values, all functions appear in the
- output.
- For shell functions, variables referenced within the
- function body are expanded.
- If a function has been modified using overrides or
- using override-style operators like
- <filename>_append</filename> and
- <filename>_prepend</filename>, then the final assembled
- function body appears in the output.
- </para></listitem>
- </itemizedlist>
- </para>
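-
-        <para>
-            Because the output of <filename>bitbake -e</filename> is long,
-            it is common to pipe it through <filename>grep</filename> when
-            you only care about a single variable.
-            For example, the following command, using an image target purely
-            as an illustration, shows the final value of
-            <filename>DISTRO_FEATURES</filename>:
-            <literallayout class='monospaced'>
-     $ bitbake -e core-image-minimal | grep "^DISTRO_FEATURES="
-            </literallayout>
-        </para>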
- </section>
-
- <section id='viewing-package-information-with-oe-pkgdata-util'>
- <title>Viewing Package Information with <filename>oe-pkgdata-util</filename></title>
-
- <para>
- You can use the <filename>oe-pkgdata-util</filename>
- command-line utility to query
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PKGDATA_DIR'><filename>PKGDATA_DIR</filename></ulink>
- and display various package-related information.
-            Note that the utility can only display information for packages
-            that have already been built.
- </para>
-
- <para>
- Following are a few of the available
- <filename>oe-pkgdata-util</filename> subcommands.
- <note>
- You can use the standard * and ? globbing wildcards as part
- of package names and paths.
- </note>
- <itemizedlist>
- <listitem><para>
- <filename>oe-pkgdata-util list-pkgs [</filename><replaceable>pattern</replaceable><filename>]</filename>:
- Lists all packages that have been built, optionally
- limiting the match to packages that match
- <replaceable>pattern</replaceable>.
- </para></listitem>
- <listitem><para>
- <filename>oe-pkgdata-util list-pkg-files&nbsp;</filename><replaceable>package</replaceable><filename>&nbsp;...</filename>:
- Lists the files and directories contained in the given
- packages.
- <note>
- <para>
- A different way to view the contents of a package is
- to look at the
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}/packages-split</filename>
- directory of the recipe that generates the
- package.
- This directory is created by the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-package'><filename>do_package</filename></ulink>
- task and has one subdirectory for each package the
- recipe generates, which contains the files stored in
- that package.</para>
- <para>
- If you want to inspect the
- <filename>${WORKDIR}/packages-split</filename>
- directory, make sure that
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-rm-work'><filename>rm_work</filename></ulink>
- is not enabled when you build the recipe.
- </para>
- </note>
- </para></listitem>
- <listitem><para>
- <filename>oe-pkgdata-util find-path&nbsp;</filename><replaceable>path</replaceable><filename>&nbsp;...</filename>:
- Lists the names of the packages that contain the given
- paths.
- For example, the following tells us that
- <filename>/usr/share/man/man1/make.1</filename>
- is contained in the <filename>make-doc</filename>
- package:
- <literallayout class='monospaced'>
- $ oe-pkgdata-util find-path /usr/share/man/man1/make.1
- make-doc: /usr/share/man/man1/make.1
- </literallayout>
- </para></listitem>
- <listitem><para>
- <filename>oe-pkgdata-util lookup-recipe&nbsp;</filename><replaceable>package</replaceable><filename>&nbsp;...</filename>:
-                    Lists the names of the recipes that
- produce the given packages.
- </para></listitem>
- </itemizedlist>
- </para>
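-
-        <para>
-            As a brief illustration, the <filename>lookup-recipe</filename>
-            subcommand prints the recipe that produces a given package.
-            The package name and output shown here are only an example:
-            <literallayout class='monospaced'>
-     $ oe-pkgdata-util lookup-recipe libz1
-     zlib
-            </literallayout>
-        </para>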
-
- <para>
- For more information on the <filename>oe-pkgdata-util</filename>
- command, use the help facility:
- <literallayout class='monospaced'>
- $ oe-pkgdata-util &dash;&dash;help
- $ oe-pkgdata-util <replaceable>subcommand</replaceable> --help
- </literallayout>
- </para>
- </section>
-
- <section id='dev-viewing-dependencies-between-recipes-and-tasks'>
- <title>Viewing Dependencies Between Recipes and Tasks</title>
-
- <para>
- Sometimes it can be hard to see why BitBake wants to build other
- recipes before the one you have specified.
- Dependency information can help you understand why a recipe is
- built.
- </para>
-
- <para>
- To generate dependency information for a recipe, run the
- following command:
- <literallayout class='monospaced'>
- $ bitbake -g <replaceable>recipename</replaceable>
- </literallayout>
- This command writes the following files in the current
- directory:
- <itemizedlist>
- <listitem><para>
- <filename>pn-buildlist</filename>: A list of
- recipes/targets involved in building
- <replaceable>recipename</replaceable>.
- "Involved" here means that at least one task from the
- recipe needs to run when building
- <replaceable>recipename</replaceable> from scratch.
- Targets that are in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-ASSUME_PROVIDED'><filename>ASSUME_PROVIDED</filename></ulink>
- are not listed.
- </para></listitem>
- <listitem><para>
- <filename>task-depends.dot</filename>: A graph showing
- dependencies between tasks.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- The graphs are in
- <ulink url='https://en.wikipedia.org/wiki/DOT_%28graph_description_language%29'>DOT</ulink>
- format and can be converted to images (e.g. using the
- <filename>dot</filename> tool from
- <ulink url='http://www.graphviz.org/'>Graphviz</ulink>).
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- DOT files use a plain text format.
- The graphs generated using the
- <filename>bitbake -g</filename> command are often so
- large as to be difficult to read without special
-                        pruning (e.g. with BitBake's
-                        <filename>-I</filename> option) and processing.
-                        Despite the form and size of the graphs, it is
-                        still possible to read the corresponding
-                        <filename>.dot</filename> files and extract useful
-                        information from them.
- </para>
-
- <para>As an example, the
- <filename>task-depends.dot</filename> file contains
- lines such as the following:
- <literallayout class='monospaced'>
- "libxslt.do_configure" -> "libxml2.do_populate_sysroot"
- </literallayout>
- The above example line reveals that the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-configure'><filename>do_configure</filename></ulink>
- task in <filename>libxslt</filename> depends on the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></ulink>
- task in <filename>libxml2</filename>, which is a
- normal
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- dependency between the two recipes.
- </para></listitem>
- <listitem><para>
- For an example of how <filename>.dot</filename>
- files can be processed, see the
- <filename>scripts/contrib/graph-tool</filename>
- Python script, which finds and displays paths
- between graph nodes.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
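-
-        <para>
-            For example, assuming the <filename>dot</filename> tool from
-            Graphviz is installed on the build host, you could render the
-            task graph for an image target as an SVG image:
-            <literallayout class='monospaced'>
-     $ bitbake -g core-image-minimal
-     $ dot -Tsvg task-depends.dot -o task-depends.svg
-            </literallayout>
-        </para>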
-
- <para>
- You can use a different method to view dependency information
- by using the following command:
- <literallayout class='monospaced'>
- $ bitbake -g -u taskexp <replaceable>recipename</replaceable>
- </literallayout>
- This command displays a GUI window from which you can view
- build-time and runtime dependencies for the recipes involved in
- building <replaceable>recipename</replaceable>.
- </para>
- </section>
-
- <section id='dev-viewing-task-variable-dependencies'>
- <title>Viewing Task Variable Dependencies</title>
-
- <para>
- As mentioned in the
- "<ulink url='&YOCTO_DOCS_BB_URL;#checksums'>Checksums (Signatures)</ulink>"
- section of the BitBake User Manual, BitBake tries to
- automatically determine what variables a task depends on so
- that it can rerun the task if any values of the variables
- change.
- This determination is usually reliable.
- However, if you do things like construct variable names at
- runtime, then you might have to manually declare dependencies
- on those variables using <filename>vardeps</filename> as
- described in the
- "<ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'>Variable Flags</ulink>"
- section of the BitBake User Manual.
- </para>
-
- <para>
- If you are unsure whether a variable dependency is being
- picked up automatically for a given task, you can list the
- variable dependencies BitBake has determined by doing the
- following:
- <orderedlist>
- <listitem><para>
- Build the recipe containing the task:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>recipename</replaceable>
- </literallayout>
- </para></listitem>
- <listitem><para>
- Inside the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-STAMPS_DIR'><filename>STAMPS_DIR</filename></ulink>
- directory, find the signature data
- (<filename>sigdata</filename>) file that corresponds
- to the task.
- The <filename>sigdata</filename> files contain a pickled
- Python database of all the metadata that went into
- creating the input checksum for the task.
- As an example, for the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-fetch'><filename>do_fetch</filename></ulink>
- task of the <filename>db</filename> recipe, the
- <filename>sigdata</filename> file might be found in the
- following location:
- <literallayout class='monospaced'>
- ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1
- </literallayout>
- For tasks that are accelerated through the shared state
- (<ulink url='&YOCTO_DOCS_OM_URL;#shared-state-cache'>sstate</ulink>)
- cache, an additional <filename>siginfo</filename> file
- is written into
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR'><filename>SSTATE_DIR</filename></ulink>
- along with the cached task output.
- The <filename>siginfo</filename> files contain exactly
- the same information as <filename>sigdata</filename>
- files.
- </para></listitem>
- <listitem><para>
- Run <filename>bitbake-dumpsig</filename> on the
- <filename>sigdata</filename> or
- <filename>siginfo</filename> file.
- Here is an example:
- <literallayout class='monospaced'>
- $ bitbake-dumpsig ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1
- </literallayout>
- In the output of the above command, you will find a
- line like the following, which lists all the (inferred)
- variable dependencies for the task.
- This list also includes indirect dependencies from
- variables depending on other variables, recursively.
- <literallayout class='monospaced'>
- Task dependencies: ['PV', 'SRCREV', 'SRC_URI', 'SRC_URI[md5sum]', 'SRC_URI[sha256sum]', 'base_do_fetch']
- </literallayout>
- <note>
- Functions (e.g. <filename>base_do_fetch</filename>)
- also count as variable dependencies.
- These functions in turn depend on the variables they
- reference.
- </note>
- The output of <filename>bitbake-dumpsig</filename> also
- includes the value each variable had, a list of
- dependencies for each variable, and
- <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_HASHBASE_WHITELIST'><filename>BB_HASHBASE_WHITELIST</filename></ulink>
- information.
- </para></listitem>
- </orderedlist>
- </para>
-
- <para>
- There is also a <filename>bitbake-diffsigs</filename> command
- for comparing two <filename>siginfo</filename> or
- <filename>sigdata</filename> files.
- This command can be helpful when trying to figure out what
- changed between two versions of a task.
- If you call <filename>bitbake-diffsigs</filename> with just one
- file, the command behaves like
- <filename>bitbake-dumpsig</filename>.
- </para>
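-
-        <para>
-            For example, to see what changed between two runs of the same
-            task, you could compare the two corresponding signature files.
-            The paths and hash placeholders below are purely hypothetical:
-            <literallayout class='monospaced'>
-     $ bitbake-diffsigs \
-           ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.<replaceable>hash1</replaceable> \
-           ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.<replaceable>hash2</replaceable>
-            </literallayout>
-        </para>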
-
- <para>
- You can also use BitBake to dump out the signature construction
- information without executing tasks by using either of the
- following BitBake command-line options:
- <literallayout class='monospaced'>
- &dash;&dash;dump-signatures=<replaceable>SIGNATURE_HANDLER</replaceable>
- -S <replaceable>SIGNATURE_HANDLER</replaceable>
- </literallayout>
- <note>
- Two common values for
- <replaceable>SIGNATURE_HANDLER</replaceable> are "none" and
- "printdiff", which dump only the signature or compare the
- dumped signature with the cached one, respectively.
- </note>
- Using BitBake with either of these options causes BitBake to
- dump out <filename>sigdata</filename> files in the
- <filename>stamps</filename> directory for every task it would
- have executed instead of building the specified target package.
- </para>
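-
-        <para>
-            For example, using an image target purely as an illustration,
-            the first command below writes out signature data for every
-            task involved in the build without running the tasks, while the
-            second additionally compares those signatures with any matching
-            cached ones:
-            <literallayout class='monospaced'>
-     $ bitbake core-image-minimal -S none
-     $ bitbake core-image-minimal -S printdiff
-            </literallayout>
-        </para>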
- </section>
-
- <section id='dev-viewing-metadata-used-to-create-the-input-signature-of-a-shared-state-task'>
- <title>Viewing Metadata Used to Create the Input Signature of a Shared State Task</title>
-
- <para>
- Seeing what metadata went into creating the input signature
- of a shared state (sstate) task can be a useful debugging
- aid.
- This information is available in signature information
- (<filename>siginfo</filename>) files in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR'><filename>SSTATE_DIR</filename></ulink>.
- For information on how to view and interpret information in
- <filename>siginfo</filename> files, see the
- "<link linkend='dev-viewing-task-variable-dependencies'>Viewing Task Variable Dependencies</link>"
- section.
- </para>
-
- <para>
- For conceptual information on shared state, see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#shared-state'>Shared State</ulink>"
- section in the Yocto Project Overview and Concepts Manual.
- </para>
- </section>
-
- <section id='dev-invalidating-shared-state-to-force-a-task-to-run'>
- <title>Invalidating Shared State to Force a Task to Run</title>
-
- <para>
- The OpenEmbedded build system uses
- <ulink url='&YOCTO_DOCS_OM_URL;#overview-checksums'>checksums</ulink>
- and
- <ulink url='&YOCTO_DOCS_OM_URL;#shared-state'>shared state</ulink>
- cache to avoid unnecessarily rebuilding tasks.
- Collectively, this scheme is known as "shared state code."
- </para>
-
- <para>
- As with all schemes, this one has some drawbacks.
- It is possible that you could make implicit changes to your
- code that the checksum calculations do not take into
- account.
- These implicit changes affect a task's output but do not
- trigger the shared state code into rebuilding a recipe.
- Consider an example during which a tool changes its output.
- Assume that the output of <filename>rpmdeps</filename>
- changes.
- The result of the change should be that all the
- <filename>package</filename> and
- <filename>package_write_rpm</filename> shared state cache
- items become invalid.
- However, because the change to the output is
- external to the code and therefore implicit,
- the associated shared state cache items do not become
- invalidated.
- In this case, the build process uses the cached items
- rather than running the task again.
- Obviously, these types of implicit changes can cause
- problems.
- </para>
-
- <para>
- To avoid these problems during the build, you need to
- understand the effects of any changes you make.
- Realize that changes you make directly to a function
- are automatically factored into the checksum calculation.
- Thus, these explicit changes invalidate the associated
- area of shared state cache.
- However, you need to be aware of any implicit changes that
- are not obvious changes to the code and could affect
- the output of a given task.
- </para>
-
- <para>
- When you identify an implicit change, you can easily
- take steps to invalidate the cache and force the tasks
- to run.
- The steps you can take are as simple as changing a
- function's comments in the source code.
- For example, to invalidate package shared state files,
- change the comment statements of
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-package'><filename>do_package</filename></ulink>
- or the comments of one of the functions it calls.
- Even though the change is purely cosmetic, it causes the
- checksum to be recalculated and forces the build system to
- run the task again.
- <note>
- For an example of a commit that makes a cosmetic
- change to invalidate shared state, see this
- <ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/commit/meta/classes/package.bbclass?id=737f8bbb4f27b4837047cb9b4fbfe01dfde36d54'>commit</ulink>.
- </note>
- </para>
- </section>
-
- <section id='dev-debugging-taskrunning'>
- <title>Running Specific Tasks</title>
-
- <para>
- Any given recipe consists of a set of tasks.
- The standard BitBake behavior in most cases is:
- <filename>do_fetch</filename>,
- <filename>do_unpack</filename>,
- <filename>do_patch</filename>,
- <filename>do_configure</filename>,
- <filename>do_compile</filename>,
- <filename>do_install</filename>,
- <filename>do_package</filename>,
- <filename>do_package_write_*</filename>, and
- <filename>do_build</filename>.
- The default task is <filename>do_build</filename> and any tasks
- on which it depends build first.
- Some tasks, such as <filename>do_devshell</filename>, are not
- part of the default build chain.
- If you wish to run a task that is not part of the default build
- chain, you can use the <filename>-c</filename> option in
- BitBake.
- Here is an example:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop -c devshell
- </literallayout>
- </para>
-
- <para>
- The <filename>-c</filename> option respects task dependencies,
- which means that all other tasks (including tasks from other
- recipes) that the specified task depends on will be run before
- the task.
- Even when you manually specify a task to run with
- <filename>-c</filename>, BitBake will only run the task if it
- considers it "out of date".
- See the
- "<ulink url='&YOCTO_DOCS_OM_URL;#stamp-files-and-the-rerunning-of-tasks'>Stamp Files and the Rerunning of Tasks</ulink>"
- section in the Yocto Project Overview and Concepts Manual for
- how BitBake determines whether a task is "out of date".
- </para>
-
- <para>
- If you want to force an up-to-date task to be rerun (e.g.
- because you made manual modifications to the recipe's
- <ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>
- that you want to try out), then you can use the
- <filename>-f</filename> option.
- <note>
- The reason <filename>-f</filename> is never required when
- running the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-devshell'><filename>do_devshell</filename></ulink>
- task is because the
- <filename>[</filename><ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'><filename>nostamp</filename></ulink><filename>]</filename>
- variable flag is already set for the task.
- </note>
- The following example shows one way you can use the
- <filename>-f</filename> option:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop
- .
- .
- make some changes to the source code in the work directory
- .
- .
- $ bitbake matchbox-desktop -c compile -f
- $ bitbake matchbox-desktop
- </literallayout>
- </para>
-
- <para>
- This sequence first builds and then recompiles
- <filename>matchbox-desktop</filename>.
- The last command reruns all tasks (basically the packaging
- tasks) after the compile.
- BitBake recognizes that the <filename>do_compile</filename>
- task was rerun and therefore understands that the other tasks
- also need to be run again.
- </para>
-
- <para>
- Another, shorter way to rerun a task and all
- <ulink url='&YOCTO_DOCS_REF_URL;#normal-recipe-build-tasks'>normal recipe build tasks</ulink>
- that depend on it is to use the <filename>-C</filename>
- option.
- <note>
- This option is upper-cased and is separate from the
- <filename>-c</filename> option, which is lower-cased.
- </note>
- Using this option invalidates the given task and then runs the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-build'><filename>do_build</filename></ulink>
- task, which is the default task if no task is given, and the
- tasks on which it depends.
- You could replace the final two commands in the previous example
- with the following single command:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop -C compile
- </literallayout>
- Internally, the <filename>-f</filename> and
- <filename>-C</filename> options work by tainting (modifying) the
- input checksum of the specified task.
- This tainting indirectly causes the task and its
- dependent tasks to be rerun through the normal task dependency
- mechanisms.
- <note>
- BitBake explicitly keeps track of which tasks have been
- tainted in this fashion, and will print warnings such as the
- following for builds involving such tasks:
- <literallayout class='monospaced'>
- WARNING: /home/ulf/poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.1.bb.do_compile is tainted from a forced run
- </literallayout>
- The purpose of the warning is to let you know that the work
- directory and build output might not be in the clean state
- they would be in for a "normal" build, depending on what
- actions you took.
- To get rid of such warnings, you can remove the work
- directory and rebuild the recipe, as follows:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop -c clean
- $ bitbake matchbox-desktop
- </literallayout>
- </note>
- </para>
-
- <para>
- You can view a list of tasks in a given package by running the
- <filename>do_listtasks</filename> task as follows:
- <literallayout class='monospaced'>
- $ bitbake matchbox-desktop -c listtasks
- </literallayout>
- The results appear as output to the console and are also in the
- file <filename>${WORKDIR}/temp/log.do_listtasks</filename>.
- </para>
- </section>
-
- <section id='dev-debugging-bitbake'>
- <title>General BitBake Problems</title>
-
- <para>
- You can see debug output from BitBake by using the
- <filename>-D</filename> option.
- The debug output gives more information about what BitBake
- is doing and the reason behind it.
- Each <filename>-D</filename> option you use increases the
- logging level.
- The most common usage is <filename>-DDD</filename>.
- </para>
-
- <para>
- The output from
- <filename>bitbake -DDD -v</filename> <replaceable>targetname</replaceable>
- can reveal why BitBake chose a certain version of a package or
- why BitBake picked a certain provider.
- This command could also help you in a situation where you think
- BitBake did something unexpected.
- </para>
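-
-        <para>
-            For example, the following command, with an image target used
-            only as an illustration, produces verbose debug output for the
-            build:
-            <literallayout class='monospaced'>
-     $ bitbake -DDD -v core-image-minimal
-            </literallayout>
-        </para>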
- </section>
-
- <section id='dev-debugging-buildfile'>
- <title>Building with No Dependencies</title>
-
- <para>
- To build a specific recipe (<filename>.bb</filename> file),
- you can use the following command form:
- <literallayout class='monospaced'>
- $ bitbake -b <replaceable>somepath</replaceable>/<replaceable>somerecipe</replaceable>.bb
- </literallayout>
- This command form does not check for dependencies.
- Consequently, you should use it only when you know existing
- dependencies have been met.
- <note>
- You can also specify fragments of the filename.
- In this case, BitBake checks for a unique match.
- </note>
- </para>
- </section>
-
- <section id='recipe-logging-mechanisms'>
- <title>Recipe Logging Mechanisms</title>
-
- <para>
- The Yocto Project provides several logging functions for
- producing debugging output and reporting errors and warnings.
- For Python functions, the following logging functions exist.
- All of these functions log to
- <filename>${T}/log.do_</filename><replaceable>task</replaceable>,
- and can also log to standard output (stdout) with the right
- settings:
- <itemizedlist>
- <listitem><para>
- <filename>bb.plain(</filename><replaceable>msg</replaceable><filename>)</filename>:
- Writes <replaceable>msg</replaceable> as is to the
- log while also logging to stdout.
- </para></listitem>
- <listitem><para>
- <filename>bb.note(</filename><replaceable>msg</replaceable><filename>)</filename>:
- Writes "NOTE: <replaceable>msg</replaceable>" to the
- log.
- Also logs to stdout if BitBake is called with "-v".
- </para></listitem>
- <listitem><para>
- <filename>bb.debug(</filename><replaceable>level</replaceable><filename>,&nbsp;</filename><replaceable>msg</replaceable><filename>)</filename>:
- Writes "DEBUG: <replaceable>msg</replaceable>" to the
- log.
- Also logs to stdout if the log level is greater than or
- equal to <replaceable>level</replaceable>.
- See the
- "<ulink url='&YOCTO_DOCS_BB_URL;#usage-and-syntax'>-D</ulink>"
- option in the BitBake User Manual for more information.
- </para></listitem>
- <listitem><para>
- <filename>bb.warn(</filename><replaceable>msg</replaceable><filename>)</filename>:
- Writes "WARNING: <replaceable>msg</replaceable>" to the
- log while also logging to stdout.
- </para></listitem>
- <listitem><para>
- <filename>bb.error(</filename><replaceable>msg</replaceable><filename>)</filename>:
- Writes "ERROR: <replaceable>msg</replaceable>" to the
- log while also logging to standard out (stdout).
- <note>
- Calling this function does not cause the task to fail.
- </note>
- </para></listitem>
- <listitem><para>
- <filename>bb.fatal(</filename><replaceable>msg</replaceable><filename>)</filename>:
- This logging function is similar to
- <filename>bb.error(</filename><replaceable>msg</replaceable><filename>)</filename>
- but also causes the calling task to fail.
- <note>
- <filename>bb.fatal()</filename> raises an exception,
- which means you do not need to put a "return"
- statement after the function.
- </note>
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- The same logging functions are also available in shell
- functions, under the names
- <filename>bbplain</filename>, <filename>bbnote</filename>,
- <filename>bbdebug</filename>, <filename>bbwarn</filename>,
- <filename>bberror</filename>, and <filename>bbfatal</filename>.
- The
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-logging'><filename>logging</filename></ulink>
- class implements these functions.
- See that class in the
- <filename>meta/classes</filename> folder of the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- for information.
- </para>
-
- <section id='logging-with-python'>
- <title>Logging With Python</title>
-
- <para>
- When creating recipes using Python and inserting code that
- handles build logs, keep in mind the goal is to have
- informative logs while keeping the console as "silent" as
- possible.
- Also, if you want status messages in the log, use the
- "debug" loglevel.
- </para>
-
- <para>
- Following is an example written in Python.
- The code handles logging for a function that determines the
-                    number of tasks that need to be run.
- See the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-listtasks'><filename>do_listtasks</filename></ulink>"
- section for additional information:
- <literallayout class='monospaced'>
- python do_listtasks() {
- bb.debug(2, "Starting to figure out the task list")
- if noteworthy_condition:
- bb.note("There are 47 tasks to run")
- bb.debug(2, "Got to point xyz")
- if warning_trigger:
- bb.warn("Detected warning_trigger, this might be a problem later.")
- if recoverable_error:
- bb.error("Hit recoverable_error, you really need to fix this!")
- if fatal_error:
- bb.fatal("fatal_error detected, unable to print the task list")
- bb.plain("The tasks present are abc")
- bb.debug(2, "Finished figuring out the tasklist")
- }
- </literallayout>
- </para>
- </section>
-
- <section id='logging-with-bash'>
- <title>Logging With Bash</title>
-
- <para>
- When creating recipes using Bash and inserting code that
- handles build logs, you have the same goals - informative
- with minimal console output.
- The syntax you use for recipes written in Bash is similar
- to that of recipes written in Python described in the
- previous section.
- </para>
-
- <para>
- Following is an example written in Bash.
- The code logs the progress of the <filename>do_my_function</filename> function.
- <literallayout class='monospaced'>
- do_my_function() {
- bbdebug 2 "Running do_my_function"
- if [ exceptional_condition ]; then
- bbnote "Hit exceptional_condition"
- fi
- bbdebug 2 "Got to point xyz"
- if [ warning_trigger ]; then
- bbwarn "Detected warning_trigger, this might cause a problem later."
- fi
- if [ recoverable_error ]; then
- bberror "Hit recoverable_error, correcting"
- fi
- if [ fatal_error ]; then
- bbfatal "fatal_error detected"
- fi
- bbdebug 2 "Completed do_my_function"
- }
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id='debugging-parallel-make-races'>
- <title>Debugging Parallel Make Races</title>
-
- <para>
- A parallel <filename>make</filename> race occurs when the build
-                consists of several parts that are run simultaneously and
-                the output or result of one part is not ready for use by a
-                different part of the build that depends on that output.
- Parallel make races are annoying and can sometimes be difficult
- to reproduce and fix.
- However, some simple tips and tricks exist that can help
- you debug and fix them.
- This section presents a real-world example of an error
- encountered on the Yocto Project autobuilder and the process
- used to fix it.
- <note>
- If you cannot properly fix a <filename>make</filename> race
- condition, you can work around it by clearing either the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKEINST'><filename>PARALLEL_MAKEINST</filename></ulink>
- variables.
- </note>
- </para>
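-
-            <para>
-                As a simple sketch of the workaround mentioned in the note
-                above, the following <filename>local.conf</filename> lines
-                disable parallel builds entirely:
-                <literallayout class='monospaced'>
-     PARALLEL_MAKE = ""
-     PARALLEL_MAKEINST = ""
-                </literallayout>
-                Doing so trades build speed for reliability, so it is best
-                treated as a temporary measure while you track down the
-                actual race.
-            </para>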
-
- <section id='the-failure'>
- <title>The Failure</title>
-
- <para>
- For this example, assume that you are building an image that
- depends on the "neard" package.
- And, during the build, BitBake runs into problems and
- creates the following output.
- <note>
- This example log file has longer lines artificially
- broken to make the listing easier to read.
- </note>
- If you examine the output or the log file, you see the
- failure during <filename>make</filename>:
- <literallayout class='monospaced'>
- | DEBUG: SITE files ['endian-little', 'bit-32', 'ix86-common', 'common-linux', 'common-glibc', 'i586-linux', 'common']
- | DEBUG: Executing shell function do_compile
- | NOTE: make -j 16
- | make --no-print-directory all-am
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/types.h include/near/types.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/log.h include/near/log.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/plugin.h include/near/plugin.h
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/tag.h include/near/tag.h
- | /bin/mkdir -p include/near
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/adapter.h include/near/adapter.h
- | /bin/mkdir -p include/near
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/ndef.h include/near/ndef.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/tlv.h include/near/tlv.h
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/setting.h include/near/setting.h
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | /bin/mkdir -p include/near
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/device.h include/near/device.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/nfc_copy.h include/near/nfc_copy.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/snep.h include/near/snep.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/version.h include/near/version.h
- | ln -s /home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/
- 0.14-r0/neard-0.14/include/dbus.h include/near/dbus.h
- | ./src/genbuiltin nfctype1 nfctype2 nfctype3 nfctype4 p2p > src/builtin.h
- | i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/
- build/build/tmp/sysroots/qemux86 -DHAVE_CONFIG_H -I. -I./include -I./src -I./gdbus -I/home/pokybuild/
- yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/sysroots/qemux86/usr/include/glib-2.0
- -I/home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/tmp/sysroots/qemux86/usr/
- lib/glib-2.0/include -I/home/pokybuild/yocto-autobuilder/yocto-slave/nightly-x86/build/build/
- tmp/sysroots/qemux86/usr/include/dbus-1.0 -I/home/pokybuild/yocto-autobuilder/yocto-slave/
- nightly-x86/build/build/tmp/sysroots/qemux86/usr/lib/dbus-1.0/include -I/home/pokybuild/yocto-autobuilder/
- yocto-slave/nightly-x86/build/build/tmp/sysroots/qemux86/usr/include/libnl3
- -DNEAR_PLUGIN_BUILTIN -DPLUGINDIR=\""/usr/lib/near/plugins"\"
- -DCONFIGDIR=\""/etc/neard\"" -O2 -pipe -g -feliminate-unused-debug-types -c
- -o tools/snep-send.o tools/snep-send.c
- | In file included from tools/snep-send.c:16:0:
- | tools/../src/near.h:41:23: fatal error: near/dbus.h: No such file or directory
- | #include &lt;near/dbus.h&gt;
- | ^
- | compilation terminated.
- | make[1]: *** [tools/snep-send.o] Error 1
- | make[1]: *** Waiting for unfinished jobs....
- | make: *** [all] Error 2
- | ERROR: oe_runmake failed
- </literallayout>
- </para>
- </section>
-
- <section id='reproducing-the-error'>
- <title>Reproducing the Error</title>
-
- <para>
- Because race conditions are intermittent, they do not
- manifest themselves every time you do the build.
- In fact, most times the build will complete without problems
- even though the potential race condition exists.
- Thus, once the error surfaces, you need a way to reproduce
- it.
- </para>
-
- <para>
- In this example, compiling the "neard" package is causing
- the problem.
- So the first thing to do is build "neard" locally.
- Before you start the build, set the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></ulink>
- variable in your <filename>local.conf</filename> file to
- a high number (e.g. "-j 20").
- Using a high value for <filename>PARALLEL_MAKE</filename>
- increases the chances of the race condition showing up:
- <literallayout class='monospaced'>
- $ bitbake neard
- </literallayout>
- </para>
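-
-                <para>
-                    The corresponding <filename>local.conf</filename> line
-                    might look like the following, where the value is only
-                    an illustration and should be chosen to suit your build
-                    host:
-                <literallayout class='monospaced'>
-     PARALLEL_MAKE = "-j 20"
-                </literallayout>
-                </para>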
-
- <para>
- Once the local build for "neard" completes, start a
- <filename>devshell</filename> build:
- <literallayout class='monospaced'>
- $ bitbake neard -c devshell
- </literallayout>
- For information on how to use a
- <filename>devshell</filename>, see the
- "<link linkend='platdev-appdev-devshell'>Using a Development Shell</link>"
- section.
- </para>
-
- <para>
- In the <filename>devshell</filename>, do the following:
- <literallayout class='monospaced'>
- $ make clean
- $ make tools/snep-send.o
- </literallayout>
- The <filename>devshell</filename> commands cause the failure
-                    to be clearly visible.
- In this case, a missing dependency exists for the "neard"
- Makefile target.
- Here is some abbreviated, sample output with the
- missing dependency clearly visible at the end:
- <literallayout class='monospaced'>
- i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/scott-lenovo/......
- .
- .
- .
- tools/snep-send.c
- In file included from tools/snep-send.c:16:0:
- tools/../src/near.h:41:23: fatal error: near/dbus.h: No such file or directory
- #include &lt;near/dbus.h&gt;
- ^
- compilation terminated.
- make: *** [tools/snep-send.o] Error 1
- $
- </literallayout>
- </para>
- </section>
-
- <section id='creating-a-patch-for-the-fix'>
- <title>Creating a Patch for the Fix</title>
-
- <para>
- Because there is a missing dependency for the Makefile
- target, you need to patch the
- <filename>Makefile.am</filename> file, which is generated
- from <filename>Makefile.in</filename>.
- You can use Quilt to create the patch:
- <literallayout class='monospaced'>
- $ quilt new parallelmake.patch
- Patch patches/parallelmake.patch is now on top
- $ quilt add Makefile.am
- File Makefile.am added to patch patches/parallelmake.patch
- </literallayout>
- For more information on using Quilt, see the
- "<link linkend='using-a-quilt-workflow'>Using Quilt in Your Workflow</link>"
- section.
- </para>
-
- <para>
- At this point you need to make the edits to
- <filename>Makefile.am</filename> to add the missing
- dependency.
- For our example, you have to add the following line
- to the file:
- <literallayout class='monospaced'>
- tools/snep-send.$(OBJEXT): include/near/dbus.h
- </literallayout>
- </para>
-
- <para>
- Once you have edited the file, use the
- <filename>refresh</filename> command to create the patch:
- <literallayout class='monospaced'>
- $ quilt refresh
- Refreshed patch patches/parallelmake.patch
- </literallayout>
- Once the patch file exists, you need to add it back to the
- originating recipe folder.
- Here is an example assuming a top-level
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- named <filename>poky</filename>:
- <literallayout class='monospaced'>
- $ cp patches/parallelmake.patch poky/meta/recipes-connectivity/neard/neard
- </literallayout>
- The final thing you need to do to implement the fix in the
- build is to update the "neard" recipe (i.e.
- <filename>neard-0.14.bb</filename>) so that the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- statement includes the patch file.
- The recipe file is in the folder above the patch.
- Here is what the edited <filename>SRC_URI</filename>
- statement would look like:
- <literallayout class='monospaced'>
- SRC_URI = "${KERNELORG_MIRROR}/linux/network/nfc/${BPN}-${PV}.tar.xz \
- file://neard.in \
- file://neard.service.in \
- file://parallelmake.patch \
- "
- </literallayout>
- </para>
-
- <para>
- With the patch complete and moved to the correct folder and
- the <filename>SRC_URI</filename> statement updated, you can
- exit the <filename>devshell</filename>:
- <literallayout class='monospaced'>
- $ exit
- </literallayout>
- </para>
- </section>
-
- <section id='testing-the-build'>
- <title>Testing the Build</title>
-
- <para>
- With everything in place, you can get back to trying the
- build again locally:
- <literallayout class='monospaced'>
- $ bitbake neard
- </literallayout>
- This build should succeed.
- </para>
-
- <para>
- Now you can open up a <filename>devshell</filename> again
- and repeat the clean and make operations as follows:
- <literallayout class='monospaced'>
- $ bitbake neard -c devshell
- $ make clean
- $ make tools/snep-send.o
- </literallayout>
- The build should work without issue.
- </para>
-
- <para>
- As with all solved problems, if they originated upstream,
- you need to submit the fix for the recipe in OE-Core and
- upstream so that the problem is taken care of at its
- source.
- See the
- "<link linkend='how-to-submit-a-change'>Submitting a Change to the Yocto Project</link>"
- section for more information.
- </para>
- </section>
- </section>
-
- <section id="platdev-gdb-remotedebug">
- <title>Debugging With the GNU Project Debugger (GDB) Remotely</title>
-
- <para>
- GDB allows you to examine running programs, which in turn helps
- you to understand and fix problems.
- It also allows you to perform post-mortem style analysis of
- program crashes.
- GDB is available as a package within the Yocto Project and is
- installed in SDK images by default.
- See the
- "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
- chapter in the Yocto Project Reference Manual for a description of
- these images.
- You can find information on GDB at
- <ulink url="http://sourceware.org/gdb/"/>.
- <note><title>Tip</title>
- For best results, install debug (<filename>-dbg</filename>)
- packages for the applications you are going to debug.
- Doing so makes extra debug symbols available that give you
- more meaningful output.
- </note>
- </para>
-
- <para>
- Sometimes, due to memory or disk space constraints, it is not
- possible to use GDB directly on the remote target to debug
- applications.
- These constraints arise because GDB needs to load the debugging
- information and the binaries of the process being debugged.
- Additionally, GDB needs to perform many computations to locate
- information such as function names, variable names and values,
- stack traces and so forth - even before starting the debugging
- process.
- These extra computations place more load on the target system
- and can alter the characteristics of the program being debugged.
- </para>
-
- <para>
- To help get past the previously mentioned constraints, you can
- use gdbserver, which runs on the remote target and does not
- load any debugging information from the debugged process.
- Instead, a GDB instance processes the debugging information that
- is run on a remote computer - the host GDB.
- The host GDB then sends control commands to gdbserver to make
- it stop or start the debugged program, as well as read or write
- memory regions of that debugged program.
- All the debugging information loaded and processed as well
- as all the heavy debugging is done by the host GDB.
- Offloading these processes gives the gdbserver running on the
- target a chance to remain small and fast.
- </para>
-
- <para>
- Because the host GDB is responsible for loading the debugging
- information and for doing the necessary processing to make
- actual debugging happen, you have to make sure the host can
- access the unstripped binaries complete with their debugging
- information and also be sure the target is compiled with no
- optimizations.
- The host GDB must also have local access to all the libraries
- used by the debugged program.
- Because gdbserver does not need any local debugging information,
- the binaries on the remote target can remain stripped.
- However, the binaries must also be compiled without optimization
- so they match the host's binaries.
- </para>
-
- <para>
- To remain consistent with GDB documentation and terminology,
- the binary being debugged on the remote target machine is
- referred to as the "inferior" binary.
- For documentation on GDB see the
- <ulink url="http://sourceware.org/gdb/documentation/">GDB site</ulink>.
- </para>
-
- <para>
- The following steps show you how to debug using the GNU project
- debugger.
- <orderedlist>
- <listitem><para>
- <emphasis>Configure your build system to construct the
- companion debug filesystem:</emphasis></para>
-
- <para>In your <filename>local.conf</filename> file, set
- the following:
- <literallayout class='monospaced'>
- IMAGE_GEN_DEBUGFS = "1"
- IMAGE_FSTYPES_DEBUGFS = "tar.bz2"
- </literallayout>
- These options cause the OpenEmbedded build system
- to generate a special companion filesystem fragment,
- which contains the matching source and debug symbols to
- your deployable filesystem.
- The build system does this by looking at what is in the
- deployed filesystem, and pulling the corresponding
- <filename>-dbg</filename> packages.</para>
-
- <para>The companion debug filesystem is not a complete
- filesystem, but only contains the debug fragments.
- This filesystem must be combined with the full filesystem
- for debugging.
- Subsequent steps in this procedure show how to combine
- the partial filesystem with the full filesystem.
- </para></listitem>
- <listitem><para>
- <emphasis>Configure the system to include gdbserver in
- the target filesystem:</emphasis></para>
-
- <para>Make the following addition in either your
- <filename>local.conf</filename> file or in an image
- recipe:
- <literallayout class='monospaced'>
-     IMAGE_INSTALL_append = " gdbserver"
- </literallayout>
- The change makes sure the <filename>gdbserver</filename>
- package is included.
- </para></listitem>
- <listitem><para>
- <emphasis>Build the environment:</emphasis></para>
-
- <para>Use the following command to construct the image
- and the companion Debug Filesystem:
- <literallayout class='monospaced'>
- $ bitbake <replaceable>image</replaceable>
- </literallayout>
- Build the cross GDB component and make it available
- for debugging.
- Build the SDK that matches the image.
- Building the SDK is best for a production build
- that can be used later for debugging, especially
- during long term maintenance:
- <literallayout class='monospaced'>
- $ bitbake -c populate_sdk <replaceable>image</replaceable>
- </literallayout></para>
-
- <para>Alternatively, you can build the minimal
- toolchain components that match the target.
- Doing so creates a smaller than typical SDK and only
- contains a minimal set of components with which to
- build simple test applications, as well as run the
- debugger:
- <literallayout class='monospaced'>
- $ bitbake meta-toolchain
- </literallayout></para>
-
-                    <para>A final method is to build GDB itself within
- the build system:
- <literallayout class='monospaced'>
- $ bitbake gdb-cross-<replaceable>architecture</replaceable>
- </literallayout>
- Doing so produces a temporary copy of
- <filename>cross-gdb</filename> you can use for
- debugging during development.
- While this is the quickest approach, the two previous
- methods in this step are better when considering
- long-term maintenance strategies.
- <note>
- If you run
- <filename>bitbake gdb-cross</filename>, the
- OpenEmbedded build system suggests the actual
- image (e.g. <filename>gdb-cross-i586</filename>).
- The suggestion is usually the actual name you want
- to use.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Set up the</emphasis>&nbsp;<filename>debugfs</filename></para>
-
- <para>Run the following commands to set up the
- <filename>debugfs</filename>:
- <literallayout class='monospaced'>
- $ mkdir debugfs
- $ cd debugfs
- $ tar xvfj <replaceable>build-dir</replaceable>/tmp-glibc/deploy/images/<replaceable>machine</replaceable>/<replaceable>image</replaceable>.rootfs.tar.bz2
- $ tar xvfj <replaceable>build-dir</replaceable>/tmp-glibc/deploy/images/<replaceable>machine</replaceable>/<replaceable>image</replaceable>-dbg.rootfs.tar.bz2
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Set up GDB</emphasis></para>
-
- <para>Install the SDK (if you built one) and then
- source the correct environment file.
- Sourcing the environment file puts the SDK in your
- <filename>PATH</filename> environment variable.</para>
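-
- <para>For example, if you installed the SDK in its default
- location, the setup might look like the following (the
- installation path, release version, and target triplet shown
- here are only examples and will differ for your build):
- <literallayout class='monospaced'>
- $ . /opt/poky/&DISTRO;/environment-setup-cortexa8hf-neon-poky-linux-gnueabi
- $ which arm-poky-linux-gnueabi-gdb
- </literallayout>
- </para>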
-
- <para>If you are using the build system, GDB is
- located in
- <replaceable>build-dir</replaceable>/tmp/sysroots/<replaceable>host</replaceable>/usr/bin/<replaceable>architecture</replaceable>/<replaceable>architecture</replaceable>-gdb.
- </para></listitem>
- <listitem><para>
- <emphasis>Boot the target:</emphasis></para>
-
- <para>For information on how to run QEMU, see the
- <ulink url='http://wiki.qemu.org/Documentation/GettingStartedDevelopers'>QEMU Documentation</ulink>.
- <note>
- Be sure to verify that your host can access the
- target via TCP.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Debug a program:</emphasis></para>
-
- <para>Debugging a program involves running gdbserver
- on the target and then running GDB on the host.
- The example in this step debugs
- <filename>gzip</filename>:
- <literallayout class='monospaced'>
- root@qemux86:~# gdbserver localhost:1234 /bin/gzip --help
- </literallayout>
- For additional gdbserver options, see the
- <ulink url='https://www.gnu.org/software/gdb/documentation/'>GDB Server Documentation</ulink>.
- </para>
-
- <para>After running gdbserver on the target, you need
- to run GDB on the host, configure it, and connect to
- the target.
- Use these commands:
- <literallayout class='monospaced'>
- $ cd <replaceable>directory-holding-the-debugfs-directory</replaceable>
- $ <replaceable>arch</replaceable>-gdb
-
- (gdb) set sysroot debugfs
- (gdb) set substitute-path /usr/src/debug debugfs/usr/src/debug
- (gdb) target remote <replaceable>IP-of-target</replaceable>:1234
- </literallayout>
- At this point, everything should automatically load
- (i.e. matching binaries, symbols and headers).
- <note>
- The GDB <filename>set</filename> commands in the
- previous example can be placed into the user's
- <filename>~/.gdbinit</filename> file.
- Upon starting, GDB automatically runs whatever
- commands are in that file.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Deploying without a full image
- rebuild:</emphasis></para>
-
- <para>In many cases, during development you want a
- quick method to deploy a new binary to the target and
- debug it, without waiting for a full image build.
- </para>
-
- <para>One approach to solving this situation is to
- just build the component you want to debug.
- Once you have built the component, copy the
- executable directly to both the target and the
- host <filename>debugfs</filename>.</para>
-
- <para>If the binary is processed through the debug
- splitting in OpenEmbedded, you should also
- copy the debug items (i.e. <filename>.debug</filename>
- contents and corresponding
- <filename>/usr/src/debug</filename> files)
- from the work directory.
- Here is an example:
- <literallayout class='monospaced'>
- $ bitbake bash
- $ bitbake -c devshell bash
- $ cd ..
- $ scp packages-split/bash/bin/bash <replaceable>target</replaceable>:/bin/bash
- $ cp -a packages-split/bash-dbg/* <replaceable>path</replaceable>/debugfs
- </literallayout>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='debugging-with-the-gnu-project-debugger-gdb-on-the-target'>
- <title>Debugging with the GNU Project Debugger (GDB) on the Target</title>
-
- <para>
- The previous section addressed using GDB remotely for debugging
- purposes, which is the most usual case due to the inherent
- hardware limitations on many embedded devices.
- However, debugging on the target hardware itself is also
- possible with more powerful devices.
- This section describes what you need to do in order to support
- using GDB to debug on the target hardware.
- </para>
-
- <para>
- To support this kind of debugging, you need to do the following:
- <itemizedlist>
- <listitem><para>
- Ensure that GDB is on the target.
- You can do this by adding "gdb" to
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>:
- <literallayout class='monospaced'>
- IMAGE_INSTALL_append = " gdb"
- </literallayout>
- Alternatively, you can add "tools-debug" to
- <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></ulink>:
- <literallayout class='monospaced'>
- IMAGE_FEATURES_append = " tools-debug"
- </literallayout>
- </para></listitem>
- <listitem><para>
- Ensure that debug symbols are present.
- You can make sure these symbols are present by
- installing <filename>-dbg</filename>:
- <literallayout class='monospaced'>
- IMAGE_INSTALL_append = " <replaceable>packagename</replaceable>-dbg"
- </literallayout>
- Alternatively, you can do the following to include all
- the debug symbols:
- <literallayout class='monospaced'>
- IMAGE_FEATURES_append = " dbg-pkgs"
- </literallayout>
- </para></listitem>
- </itemizedlist>
- <note>
- To improve the debug information accuracy, you can reduce
- the level of optimization used by the compiler.
- For example, adding the following line to your
- <filename>local.conf</filename> file reduces the
- optimization level from
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FULL_OPTIMIZATION'><filename>FULL_OPTIMIZATION</filename></ulink>
- of "-O2" to
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEBUG_OPTIMIZATION'><filename>DEBUG_OPTIMIZATION</filename></ulink>
- of "-O -fno-omit-frame-pointer":
- <literallayout class='monospaced'>
- DEBUG_BUILD = "1"
- </literallayout>
- Consider that this will reduce the application's performance
- and is recommended only for debugging purposes.
- </note>
- </para>
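-
- <para>
- Once an image containing <filename>gdb</filename> and the
- needed debug symbols is running on the target, you can start
- a session directly on the device.
- The program and breakpoint shown here are only an
- illustration:
- <literallayout class='monospaced'>
- root@qemux86:~# gdb /bin/gzip
- (gdb) break main
- (gdb) run --help
- </literallayout>
- </para>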
- </section>
-
- <section id='dev-other-debugging-others'>
- <title>Other Debugging Tips</title>
-
- <para>
- Here are some other tips that you might find useful:
- <itemizedlist>
- <listitem><para>
- When adding new packages, it is worth watching for
- undesirable items making their way into compiler command
- lines.
- For example, you do not want references to local system
- files like
- <filename>/usr/lib/</filename> or
- <filename>/usr/include/</filename>.
- </para></listitem>
- <listitem><para>
- If you want to remove the <filename>psplash</filename>
- boot splashscreen,
- add <filename>psplash=false</filename> to the kernel
- command line.
- Doing so prevents <filename>psplash</filename> from
- loading and thus allows you to see the console.
- It is also possible to switch out of the splashscreen by
- switching the virtual console (e.g. Fn+Left or Fn+Right
- on a Zaurus).
- </para></listitem>
- <listitem><para>
- Removing
- <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>
- (usually <filename>tmp/</filename>, within the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>)
- can often fix temporary build issues.
- Removing <filename>TMPDIR</filename> is usually a
- relatively cheap operation, because task output will be
- cached in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR'><filename>SSTATE_DIR</filename></ulink>
- (usually <filename>sstate-cache/</filename>, which is
- also in the Build Directory).
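- For example, from the Build Directory:
- <literallayout class='monospaced'>
- $ rm -rf tmp
- $ bitbake <replaceable>image</replaceable>
- </literallayout>
- The next build then repopulates <filename>tmp/</filename>,
- reusing the shared state cache where possible.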
- <note>
- Removing <filename>TMPDIR</filename> might be a
- workaround rather than a fix.
- Consequently, trying to determine the underlying
- cause of an issue before removing the directory is
- a good idea.
- </note>
- </para></listitem>
- <listitem><para>
- Understanding how a feature is used in practice within
- existing recipes can be very helpful.
- It is recommended that you configure some method that
- allows you to quickly search through files.</para>
-
- <para>Using GNU Grep, you can use the following shell
- function to recursively search through common
- recipe-related files, skipping binary files,
- <filename>.git</filename> directories, and the
- Build Directory (assuming its name starts with
- "build"):
- <literallayout class='monospaced'>
- g() {
- grep -Ir \
- --exclude-dir=.git \
- --exclude-dir='build*' \
- --include='*.bb*' \
- --include='*.inc*' \
- --include='*.conf*' \
- --include='*.py*' \
- "$@"
- }
- </literallayout>
- Following are some usage examples:
- <literallayout class='monospaced'>
- $ g FOO # Search recursively for "FOO"
- $ g -i foo # Search recursively for "foo", ignoring case
- $ g -w FOO # Search recursively for "FOO" as a word, ignoring e.g. "FOOBAR"
- </literallayout>
- If figuring out how some feature works requires a lot of
- searching, it might indicate that the documentation
- should be extended or improved.
- In such cases, consider filing a documentation bug using
- the Yocto Project implementation of
- <ulink url='https://bugzilla.yoctoproject.org/'>Bugzilla</ulink>.
- For information on how to submit a bug against
- the Yocto Project, see the Yocto Project Bugzilla
- <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>wiki page</ulink>
- and the
- "<link linkend='submitting-a-defect-against-the-yocto-project'>Submitting a Defect Against the Yocto Project</link>"
- section.
- <note>
- The manuals might not be the right place to document
- variables that are purely internal and have a
- limited scope (e.g. internal variables used to
- implement a single <filename>.bbclass</filename>
- file).
- </note>
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
- </section>
-
- <section id='making-changes-to-the-yocto-project'>
- <title>Making Changes to the Yocto Project</title>
-
- <para>
- Because the Yocto Project is an open-source, community-based
- project, you can effect changes to the project.
- This section presents procedures that show you how to submit
- a defect against the project and how to submit a change.
- </para>
-
- <section id='submitting-a-defect-against-the-yocto-project'>
- <title>Submitting a Defect Against the Yocto Project</title>
-
- <para>
- Use the Yocto Project implementation of
- <ulink url='http://www.bugzilla.org/about/'>Bugzilla</ulink>
- to submit a defect (bug) against the Yocto Project.
- For additional information on this implementation of Bugzilla see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#resources-bugtracker'>Yocto Project Bugzilla</ulink>"
- section in the Yocto Project Reference Manual.
- For more detail on any of the following steps, see the Yocto Project
- <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>Bugzilla wiki page</ulink>.
- </para>
-
- <para>
- Use the following general steps to submit a bug:
-
- <orderedlist>
- <listitem><para>
- Open the Yocto Project implementation of
- <ulink url='&YOCTO_BUGZILLA_URL;'>Bugzilla</ulink>.
- </para></listitem>
- <listitem><para>
- Click "File a Bug" to enter a new bug.
- </para></listitem>
- <listitem><para>
- Choose the appropriate "Classification", "Product", and
- "Component" for which the bug was found.
- Bugs for the Yocto Project fall into one of several
- classifications, which in turn break down into several
- products and components.
- For example, for a bug against the
- <filename>meta-intel</filename> layer, you would choose
- "Build System, Metadata &amp; Runtime", "BSPs", and
- "bsps-meta-intel", respectively.
- </para></listitem>
- <listitem><para>
- Choose the "Version" of the Yocto Project for which you found
- the bug (e.g. &DISTRO;).
- </para></listitem>
- <listitem><para>
- Determine and select the "Severity" of the bug.
- The severity indicates how the bug impacted your work.
- </para></listitem>
- <listitem><para>
- Choose the "Hardware" that the bug impacts.
- </para></listitem>
- <listitem><para>
- Choose the "Architecture" that the bug impacts.
- </para></listitem>
- <listitem><para>
- Choose a "Documentation change" item for the bug.
- Fixing a bug might or might not affect the Yocto Project
- documentation.
- If you are unsure of the impact to the documentation, select
- "Don't Know".
- </para></listitem>
- <listitem><para>
- Provide a brief "Summary" of the bug.
- Try to limit your summary to just a line or two and be sure
- to capture the essence of the bug.
- </para></listitem>
- <listitem><para>
- Provide a detailed "Description" of the bug.
- You should provide as much detail as you can about the context,
- behavior, output, and so forth that surrounds the bug.
- You can even attach supporting files for output from logs by
- using the "Add an attachment" button.
- </para></listitem>
- <listitem><para>
- Click the "Submit Bug" button submit the bug.
- A new Bugzilla number is assigned to the bug and the defect
- is logged in the bug tracking system.
- </para></listitem>
- </orderedlist>
- Once you file a bug, the bug is processed by the Yocto Project Bug
- Triage Team and further details concerning the bug are assigned
- (e.g. priority and owner).
- You are the "Submitter" of the bug and any further categorization,
- progress, or comments on the bug result in Bugzilla sending you an
- automated email concerning the particular change or progress to the
- bug.
- </para>
- </section>
-
- <section id='how-to-submit-a-change'>
- <title>Submitting a Change to the Yocto Project</title>
-
- <para>
- Contributions to the Yocto Project and OpenEmbedded are very welcome.
- Because the system is extremely configurable and flexible, we recognize
- that developers will want to extend, configure or optimize it for
- their specific uses.
- </para>
-
- <para>
- The Yocto Project uses a mailing list and a patch-based workflow
- that is similar to the Linux kernel but contains important
- differences.
- In general, a mailing list exists through which you can submit
- patches.
- You should send patches to the appropriate mailing list so that they
- can be reviewed and merged by the appropriate maintainer.
- The specific mailing list you need to use depends on the
- location of the code you are changing.
- Each component (e.g. layer) should have a
- <filename>README</filename> file that indicates where to send
- the changes and which process to follow.
- </para>
-
- <para>
- You can send the patch to the mailing list using whichever approach
- you feel comfortable with to generate the patch.
- Once sent, the patch is usually reviewed by the community at large.
- If somebody has concerns with the patch, they will usually voice
- their concern over the mailing list.
- If a patch does not receive any negative reviews, the maintainer of
- the affected layer typically takes the patch, tests it, and then
- based on successful testing, merges the patch.
- </para>
-
- <para id='figuring-out-the-mailing-list-to-use'>
- The "poky" repository, which is the Yocto Project's reference build
- environment, is a hybrid repository that contains several
- individual pieces (e.g. BitBake, Metadata, documentation,
- and so forth) built using the combo-layer tool.
- The upstream location used for submitting changes varies by
- component:
- <itemizedlist>
- <listitem><para>
- <emphasis>Core Metadata:</emphasis>
- Send your patch to the
- <ulink url='http://lists.openembedded.org/mailman/listinfo/openembedded-core'>openembedded-core</ulink>
- mailing list. For example, a change to anything under
- the <filename>meta</filename> or
- <filename>scripts</filename> directories should be sent
- to this mailing list.
- </para></listitem>
- <listitem><para>
- <emphasis>BitBake:</emphasis>
- For changes to BitBake (i.e. anything under the
- <filename>bitbake</filename> directory), send your patch
- to the
- <ulink url='http://lists.openembedded.org/mailman/listinfo/bitbake-devel'>bitbake-devel</ulink>
- mailing list.
- </para></listitem>
- <listitem><para>
- <emphasis>"meta-*" trees:</emphasis>
- These trees contain Metadata.
- Use the
- <ulink url='https://lists.yoctoproject.org/listinfo/poky'>poky</ulink>
- mailing list.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- For changes to other layers hosted in the Yocto Project source
- repositories (i.e. <filename>yoctoproject.org</filename>), tools,
- and the Yocto Project documentation, use the
- <ulink url='https://lists.yoctoproject.org/listinfo/yocto'>Yocto Project</ulink>
- general mailing list.
- <note>
- Sometimes a layer's documentation specifies to use a
- particular mailing list.
- If so, use that list.
- </note>
- For additional recipes that do not fit into the core Metadata, you
- should determine which layer the recipe should go into and submit
- the change in the manner recommended by the documentation (e.g.
- the <filename>README</filename> file) supplied with the layer.
- If in doubt, please ask on the Yocto general mailing list or on
- the openembedded-devel mailing list.
- </para>
-
- <para>
- You can also push a change upstream and request a maintainer to
- pull the change into the component's upstream repository.
- You do this by pushing to a contribution repository that is upstream.
- See the
- "<ulink url='&YOCTO_DOCS_OM_URL;#gs-git-workflows-and-the-yocto-project'>Git Workflows and the Yocto Project</ulink>"
- section in the Yocto Project Overview and Concepts Manual for additional
- concepts on working in the Yocto Project development environment.
- </para>
-
- <para>
- Two commonly used testing repositories exist for
- OpenEmbedded-Core:
- <itemizedlist>
- <listitem><para>
- <emphasis>"ross/mut" branch:</emphasis>
- The "mut" (master-under-test) tree
- exists in the <filename>poky-contrib</filename> repository
- in the
- <ulink url='&YOCTO_GIT_URL;'>Yocto Project source repositories</ulink>.
- </para></listitem>
- <listitem><para>
- <emphasis>"master-next" branch:</emphasis>
- This branch is part of the main
- "poky" repository in the Yocto Project source repositories.
- </para></listitem>
- </itemizedlist>
- Maintainers use these branches to test submissions prior to merging
- patches.
- Thus, you can get an idea of the status of a patch based on
- whether the patch has been merged into one of these branches.
- <note>
- This system is imperfect and changes can sometimes get lost in the
- flow.
- Asking about the status of a patch or change is reasonable if the
- change has been idle for a while with no feedback.
- The Yocto Project does have plans to use
- <ulink url='https://en.wikipedia.org/wiki/Patchwork_(software)'>Patchwork</ulink>
- to track the status of patches and also to automatically preview
- patches.
- </note>
- </para>
-
- <para>
- The following sections provide procedures for submitting a change.
- </para>
-
- <section id='pushing-a-change-upstream'>
- <title>Using Scripts to Push a Change Upstream and Request a Pull</title>
-
- <para>
- Follow this procedure to push a change to an upstream "contrib"
- Git repository:
- <note>
- You can find general Git information on how to push a change
- upstream in the
- <ulink url='http://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows'>Git Community Book</ulink>.
- </note>
- <orderedlist>
- <listitem><para>
- <emphasis>Make Your Changes Locally:</emphasis>
- Make your changes in your local Git repository.
- You should make small, controlled, isolated changes.
- Keeping changes small and isolated aids review,
- makes merging/rebasing easier and keeps the change
- history clean should anyone need to refer to it in
- future.
- </para></listitem>
- <listitem><para>
- <emphasis>Stage Your Changes:</emphasis>
- Stage your changes by using the <filename>git add</filename>
- command on each file you changed.
- </para></listitem>
- <listitem><para id='making-sure-you-have-correct-commit-information'>
- <emphasis>Commit Your Changes:</emphasis>
- Commit the change by using the
- <filename>git commit</filename> command.
- Make sure your commit information follows standards by
- following these accepted conventions:
- <itemizedlist>
- <listitem><para>
- Be sure to include a "Signed-off-by:" line in the
- same style as required by the Linux kernel.
- Adding this line signifies that you, the submitter,
- have agreed to the Developer's Certificate of
- Origin 1.1 as follows:
- <literallayout class='monospaced'>
- Developer's Certificate of Origin 1.1
-
- By making a contribution to this project, I certify that:
-
- (a) The contribution was created in whole or in part by me and I
- have the right to submit it under the open source license
- indicated in the file; or
-
- (b) The contribution is based upon previous work that, to the best
- of my knowledge, is covered under an appropriate open source
- license and I have the right under that license to submit that
- work with modifications, whether created in whole or in part
- by me, under the same open source license (unless I am
- permitted to submit under a different license), as indicated
- in the file; or
-
- (c) The contribution was provided directly to me by some other
- person who certified (a), (b) or (c) and I have not modified
- it.
-
- (d) I understand and agree that this project and the contribution
- are public and that a record of the contribution (including all
- personal information I submit with it, including my sign-off) is
- maintained indefinitely and may be redistributed consistent with
- this project or the open source license(s) involved.
- </literallayout>
- </para></listitem>
- <listitem><para>
- Provide a single-line summary of the change
- and, if more explanation is needed, provide more
- detail in the body of the commit.
- This summary is typically viewable in the
- "shortlist" of changes.
- Thus, providing something short and descriptive
- that gives the reader a summary of the change is
- useful when viewing a list of many commits.
- You should prefix this short description with the
- recipe name (if changing a recipe), or else with
- the short form path to the file being changed.
- </para></listitem>
- <listitem><para>
- For the body of the commit message, provide
- detailed information that describes what you
- changed, why you made the change, and the approach
- you used.
- It might also be helpful if you mention how you
- tested the change.
- Provide as much detail as you can in the body of
- the commit message.
- <note>
- You do not need to provide a more detailed
- explanation of a change if the change is
- minor to the point of the single line
- summary providing all the information.
- </note>
- </para></listitem>
- <listitem><para>
- If the change addresses a specific bug or issue
- that is associated with a bug-tracking ID,
- include a reference to that ID in your detailed
- description.
- For example, the Yocto Project uses a specific
- convention for bug references: any commit that
- addresses a specific bug should use the following
- form for the detailed description.
- Be sure to use the actual bug-tracking ID from
- Bugzilla for
- <replaceable>bug-id</replaceable>:
- <literallayout class='monospaced'>
- Fixes [YOCTO #<replaceable>bug-id</replaceable>]
-
- <replaceable>detailed description of change</replaceable>
- </literallayout>
- </para></listitem>
- </itemizedlist>
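- Putting these conventions together, a commit message might
- look like the following (the recipe name, summary, and bug
- number are only examples):
- <literallayout class='monospaced'>
- foo: fix build failure with GCC 10
-
- Backport the upstream fix for the multiple-definition errors
- seen when building with GCC 10.
-
- Fixes [YOCTO #12345]
-
- Signed-off-by: Your Name &lt;your.name@example.com&gt;
- </literallayout>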
- </para></listitem>
- <listitem><para>
- <emphasis>Push Your Commits to a "Contrib" Upstream:</emphasis>
- If you have arranged for permissions to push to an
- upstream contrib repository, push the change to that
- repository:
- <literallayout class='monospaced'>
- $ git push <replaceable>upstream_remote_repo</replaceable> <replaceable>local_branch_name</replaceable>
- </literallayout>
- For example, suppose you have permissions to push into the
- upstream <filename>meta-intel-contrib</filename>
- repository and you are working in a local branch named
- <replaceable>your_name</replaceable><filename>/README</filename>.
- The following command pushes your local commits to the
- <filename>meta-intel-contrib</filename> upstream
- repository and puts the commit in a branch named
- <replaceable>your_name</replaceable><filename>/README</filename>:
- <literallayout class='monospaced'>
- $ git push meta-intel-contrib <replaceable>your_name</replaceable>/README
- </literallayout>
- </para></listitem>
- <listitem><para id='push-determine-who-to-notify'>
- <emphasis>Determine Who to Notify:</emphasis>
- Determine the maintainer or the mailing list
- that you need to notify for the change.</para>
-
- <para>Before submitting any change, you need to be sure
- who the maintainer is or which mailing list you need
- to notify.
- Use one of these methods to find out:
- <itemizedlist>
- <listitem><para>
- <emphasis>Maintenance File:</emphasis>
- Examine the <filename>maintainers.inc</filename>
- file, which is located in the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- at
- <filename>meta/conf/distro/include</filename>,
- to see who is responsible for code.
- </para></listitem>
- <listitem><para>
- <emphasis>Search by File:</emphasis>
- Using <ulink url='&YOCTO_DOCS_OM_URL;#git'>Git</ulink>,
- you can enter the following command to bring up a
- short list of all commits against a specific file:
- <literallayout class='monospaced'>
- git shortlog -- <replaceable>filename</replaceable>
- </literallayout>
- Just provide the name of the file in which you
- are interested.
- The information returned is not ordered by history
- but does include a list of everyone who has
- committed grouped by name.
- From the list, you can see who is responsible for
- the bulk of the changes against the file.
- </para></listitem>
- <listitem><para>
- <emphasis>Examine the List of Mailing Lists:</emphasis>
- For a list of the Yocto Project and related mailing
- lists, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing lists</ulink>"
- section in the Yocto Project Reference Manual.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Make a Pull Request:</emphasis>
- Notify the maintainer or the mailing list that you have
- pushed a change by making a pull request.</para>
-
- <para>The Yocto Project provides two scripts that
- conveniently let you generate and send pull requests to the
- Yocto Project.
- These scripts are <filename>create-pull-request</filename>
- and <filename>send-pull-request</filename>.
- You can find these scripts in the
- <filename>scripts</filename> directory within the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- (e.g. <filename>~/poky/scripts</filename>).
- </para>
-
- <para>Using these scripts correctly formats the requests
- without introducing any whitespace errors or HTML formatting.
- The maintainer that receives your patches either directly
- or through the mailing list needs to be able to save and
- apply them directly from your emails.
- Using these scripts is the preferred method for sending
- patches.</para>
-
- <para>First, create the pull request.
- For example, the following command runs the script,
- specifies the upstream repository in the contrib directory
- into which you pushed the change, and provides a subject
- line in the created patch files:
- <literallayout class='monospaced'>
- $ ~/poky/scripts/create-pull-request -u meta-intel-contrib -s "Updated Manual Section Reference in README"
- </literallayout>
- Running this script forms
- <filename>*.patch</filename> files in a folder named
- <filename>pull-</filename><replaceable>PID</replaceable>
- in the current directory.
- One of the patch files is a cover letter.</para>
-
- <para>Before running the
- <filename>send-pull-request</filename> script, you must
- edit the cover letter patch to insert information about
- your change.
- After editing the cover letter, send the pull request.
- For example, the following command runs the script and
- specifies the patch directory and email address.
- In this example, the email address is a mailing list:
- <literallayout class='monospaced'>
- $ ~/poky/scripts/send-pull-request -p ~/meta-intel/pull-10565 -t meta-intel@yoctoproject.org
- </literallayout>
- You need to follow the prompts as the script is
- interactive.
- <note>
- For help on using these scripts, simply provide the
- <filename>-h</filename> argument as follows:
- <literallayout class='monospaced'>
- $ poky/scripts/create-pull-request -h
- $ poky/scripts/send-pull-request -h
- </literallayout>
- </note>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='submitting-a-patch'>
- <title>Using Email to Submit a Patch</title>
-
- <para>
- You can submit patches without using the
- <filename>create-pull-request</filename> and
- <filename>send-pull-request</filename> scripts described in the
- previous section.
- However, keep in mind, the preferred method is to use the scripts.
- </para>
-
- <para>
- Depending on the components changed, you need to submit the email
- to a specific mailing list.
- For some guidance on which mailing list to use, see the
- <link linkend='figuring-out-the-mailing-list-to-use'>list</link>
- at the beginning of this section.
- For a description of all the available mailing lists, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing Lists</ulink>"
- section in the Yocto Project Reference Manual.
- </para>
-
- <para>
- Here is the general procedure on how to submit a patch through
- email without using the scripts:
- <orderedlist>
- <listitem><para>
- <emphasis>Make Your Changes Locally:</emphasis>
- Make your changes in your local Git repository.
- You should make small, controlled, isolated changes.
- Keeping changes small and isolated aids review,
- makes merging/rebasing easier and keeps the change
- history clean should anyone need to refer to it in
- future.
- </para></listitem>
- <listitem><para>
- <emphasis>Stage Your Changes:</emphasis>
- Stage your changes by using the <filename>git add</filename>
- command on each file you changed.
- </para></listitem>
- <listitem><para>
- <emphasis>Commit Your Changes:</emphasis>
- Commit the change by using the
- <filename>git commit --signoff</filename> command.
- Using the <filename>--signoff</filename> option identifies
- you as the person making the change and also satisfies
- the Developer's Certificate of Origin (DCO) shown earlier.
- </para>
-
- <para>When you form a commit, you must follow certain
- standards established by the Yocto Project development
- team.
- See
- <link linkend='making-sure-you-have-correct-commit-information'>Step 3</link>
- in the previous section for information on how to
- provide commit information that meets Yocto Project
- commit message standards.
- </para></listitem>
- <listitem><para>
- <emphasis>Format the Commit:</emphasis>
- Format the commit into an email message.
- To format commits, use the
- <filename>git format-patch</filename> command.
- When you provide the command, you must include a revision
- list or a number of patches as part of the command.
- For example, either of these two commands takes your most
- recent single commit and formats it as an email message in
- the current directory:
- <literallayout class='monospaced'>
- $ git format-patch -1
- </literallayout>
- or
- <literallayout class='monospaced'>
- $ git format-patch HEAD~
- </literallayout></para>
-
- <para>After the command is run, the current directory
- contains a numbered <filename>.patch</filename> file for
- the commit.</para>
-
- <para>If you provide several commits as part of the
- command, the <filename>git format-patch</filename> command
- produces a series of numbered files in the current
- directory – one for each commit.
- If you have more than one patch, you should also use the
- <filename>--cover-letter</filename> option with the command,
- which generates a cover letter as the first "patch" in
- the series.
- You can then edit the cover letter to provide a
- description for the series of patches.
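- For example, assuming the three most recent commits on your
- branch make up the series you want to send, the following
- command creates the three patch files plus a cover letter in
- the current directory:
- <literallayout class='monospaced'>
- $ git format-patch --cover-letter -3
- </literallayout>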
- For information on the
- <filename>git format-patch</filename> command,
- see <filename>GIT_FORMAT_PATCH(1)</filename> displayed
- using the <filename>man git-format-patch</filename>
- command.
- <note>
- If you are or will be a frequent contributor to the
- Yocto Project or to OpenEmbedded, you might consider
- requesting a contrib area and the necessary associated
- rights.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Import the Files Into Your Mail Client:</emphasis>
- Import the files into your mail client by using the
- <filename>git send-email</filename> command.
- <note>
- In order to use <filename>git send-email</filename>,
- you must have the proper Git packages installed on
- your host.
- For Ubuntu, Debian, and Fedora the package is
- <filename>git-email</filename>.
- </note></para>
-
- <para>The <filename>git send-email</filename> command
- sends email by using a local or remote Mail Transport Agent
- (MTA) such as <filename>msmtp</filename>,
- <filename>sendmail</filename>, or through a direct
- <filename>smtp</filename> configuration in your Git
- <filename>~/.gitconfig</filename> file.
- If you are submitting patches through email only, it is
- very important that you submit them without any whitespace
- or HTML formatting that either you or your mailer
- introduces.
- The maintainer that receives your patches needs to be able
- to save and apply them directly from your emails.
- A good way to verify that what you are sending will be
- applicable by the maintainer is to do a dry run and send
- them to yourself and then save and apply them as the
- maintainer would.</para>
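-
- <para>For example, assuming the patch files are in the
- current directory and the change targets the core Metadata,
- an invocation might look like the following (substitute the
- mailing list or maintainer address appropriate for the
- component you are changing):
- <literallayout class='monospaced'>
- $ git send-email --to openembedded-core@lists.openembedded.org *.patch
- </literallayout>
- </para>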
-
- <para>The <filename>git send-email</filename> command is
- the preferred method for sending your patches using
- email since there is no risk of compromising whitespace
- in the body of the message, which can occur when you use
- your own mail client.
- The command also has several options that let you
- specify recipients and perform further editing of the
- email message.
- For information on how to use the
- <filename>git send-email</filename> command,
- see <filename>GIT-SEND-EMAIL(1)</filename> displayed using
- the <filename>man git-send-email</filename> command.
- </para></listitem>
- </orderedlist>
- </para>
- </section>
- </section>
- </section>
-
- <section id='working-with-licenses'>
- <title>Working With Licenses</title>
-
- <para>
- As mentioned in the
- "<ulink url='&YOCTO_DOCS_OM_URL;#licensing'>Licensing</ulink>"
- section in the Yocto Project Overview and Concepts Manual,
- open source projects are open to the public and they
- consequently have different licensing structures in place.
- This section describes the mechanism by which the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-system-term'>OpenEmbedded build system</ulink>
- tracks changes to licensing text and covers how to maintain open
- source license compliance during your project's lifecycle.
- The section also describes how to enable commercially licensed
- recipes, which by default are disabled.
- </para>
-
- <section id="usingpoky-configuring-LIC_FILES_CHKSUM">
- <title>Tracking License Changes</title>
-
- <para>
- The license of an upstream project might change in the future.
- In order to prevent these changes going unnoticed, the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LIC_FILES_CHKSUM'><filename>LIC_FILES_CHKSUM</filename></ulink>
- variable tracks changes to the license text. The checksums are
- validated at the end of the configure step, and if the
- checksums do not match, the build will fail.
- </para>
-
- <section id="usingpoky-specifying-LIC_FILES_CHKSUM">
- <title>Specifying the <filename>LIC_FILES_CHKSUM</filename> Variable</title>
-
- <para>
- The <filename>LIC_FILES_CHKSUM</filename>
- variable contains checksums of the license text in the
- source code for the recipe.
- Following is an example of how to specify
- <filename>LIC_FILES_CHKSUM</filename>:
- <literallayout class='monospaced'>
- LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \
- file://licfile1.txt;beginline=5;endline=29;md5=yyyy \
- file://licfile2.txt;endline=50;md5=zzzz \
- ..."
- </literallayout>
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- When using "beginline" and "endline", realize
- that line numbering begins with one and not
- zero.
- Also, the included lines are inclusive (i.e.
- lines five through and including 29 in the
- previous example for
- <filename>licfile1.txt</filename>).
- </para></listitem>
- <listitem><para>
- When a license check fails, the selected license
- text is included as part of the QA message.
- Using this output, you can determine the exact
- start and finish for the needed license text.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
-
- <para>
- The build system uses the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>
- variable as the default directory when searching files
- listed in <filename>LIC_FILES_CHKSUM</filename>.
- The previous example employs the default directory.
- </para>
-
- <para>
- Consider this next example:
- <literallayout class='monospaced'>
- LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\
- md5=bb14ed3c4cda583abc85401304b5cd4e"
- LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
- </literallayout>
- </para>
-
- <para>
- The first line locates a file in
- <filename>${S}/src/ls.c</filename> and isolates lines five
- through 16 as license text.
- The second line refers to a file in
- <ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>.
- </para>
-
- <para>
- Note that <filename>LIC_FILES_CHKSUM</filename> variable is
- mandatory for all recipes, unless the
- <filename>LICENSE</filename> variable is set to "CLOSED".
- </para>
- </section>
-
- <section id="usingpoky-LIC_FILES_CHKSUM-explanation-of-syntax">
- <title>Explanation of Syntax</title>
-
- <para>
- As mentioned in the previous section, the
- <filename>LIC_FILES_CHKSUM</filename> variable lists all
- the important files that contain the license text for the
- source code.
- It is possible to specify a checksum for an entire file,
- or a specific section of a file (specified by beginning and
- ending line numbers with the "beginline" and "endline"
- parameters, respectively).
- The latter is useful for source files with a license
- notice header, README documents, and so forth.
- If you do not use the "beginline" parameter, then it is
- assumed that the text begins on the first line of the file.
- Similarly, if you do not use the "endline" parameter,
- it is assumed that the license text ends with the last
- line of the file.
- </para>
-
- <para>
- The "md5" parameter stores the md5 checksum of the license
- text.
- If the license text changes in any way as compared to
- this parameter then a mismatch occurs.
- This mismatch triggers a build failure and notifies
- the developer.
- Notification allows the developer to review and address
- the license text changes.
- Also note that if a mismatch occurs during the build,
- the correct md5 checksum is placed in the build log and
- can be easily copied to the recipe.
- </para>
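-
- <para>
- If you need the checksum when writing or updating a recipe,
- you can also compute it directly by running
- <filename>md5sum</filename> against the license file in the
- unpacked source, for example:
- <literallayout class='monospaced'>
- $ md5sum COPYING
- 94d55d512a9ba36caa9b7df079bae19f  COPYING
- </literallayout>
- The checksum shown here is the one commonly seen for a GPLv2
- <filename>COPYING</filename> file and is only an illustration;
- use the value reported for your own file.
- </para>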
-
- <para>
- There is no limit to how many files you can specify using
- the <filename>LIC_FILES_CHKSUM</filename> variable.
- Generally, however, every project requires a few
- specifications for license tracking.
- Many projects have a "COPYING" file that stores the
- license information for all the source code files.
- This practice allows you to just track the "COPYING"
- file as long as it is kept up to date.
- <note><title>Tips</title>
- <itemizedlist>
- <listitem><para>
- If you specify an empty or invalid "md5"
- parameter,
- <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>
- returns an md5 mismatch
- error and displays the correct "md5" parameter
- value during the build.
- The correct parameter is also captured in
- the build log.
- </para></listitem>
- <listitem><para>
- If the whole file contains only license text,
- you do not need to use the "beginline" and
- "endline" parameters.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
- </section>
- </section>
-
- <section id="enabling-commercially-licensed-recipes">
- <title>Enabling Commercially Licensed Recipes</title>
-
- <para>
- By default, the OpenEmbedded build system disables
- components that have commercial or other special licensing
- requirements.
- Such requirements are defined on a
- recipe-by-recipe basis through the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE_FLAGS'><filename>LICENSE_FLAGS</filename></ulink>
- variable definition in the affected recipe.
- For instance, the
- <filename>poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly</filename>
- recipe contains the following statement:
- <literallayout class='monospaced'>
- LICENSE_FLAGS = "commercial"
- </literallayout>
- Here is a slightly more complicated example that contains both
- an explicit recipe name and version (after variable expansion):
- <literallayout class='monospaced'>
- LICENSE_FLAGS = "license_${PN}_${PV}"
- </literallayout>
- In order for a component restricted by a
- <filename>LICENSE_FLAGS</filename> definition to be enabled and
- included in an image, it needs to have a matching entry in the
- global
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE_FLAGS_WHITELIST'><filename>LICENSE_FLAGS_WHITELIST</filename></ulink>
- variable, which is a variable typically defined in your
- <filename>local.conf</filename> file.
- For example, to enable the
- <filename>poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly</filename>
- package, you could add either the string
- "commercial_gst-plugins-ugly" or the more general string
- "commercial" to <filename>LICENSE_FLAGS_WHITELIST</filename>.
- See the
- "<link linkend='license-flag-matching'>License Flag Matching</link>"
- section for a full
- explanation of how <filename>LICENSE_FLAGS</filename> matching
- works.
- Here is the example:
- <literallayout class='monospaced'>
- LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly"
- </literallayout>
- Likewise, to additionally enable the package built from the
- recipe containing
- <filename>LICENSE_FLAGS = "license_${PN}_${PV}"</filename>,
- and assuming that the actual recipe name was
- <filename>emgd_1.10.bb</filename>, the following string would
- enable that package as well as the original
- <filename>gst-plugins-ugly</filename> package:
- <literallayout class='monospaced'>
- LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly license_emgd_1.10"
- </literallayout>
- As a convenience, you do not need to specify the complete
- license string in the whitelist for every package.
- You can use an abbreviated form, which consists
- of just the first portion or portions of the license
- string before the initial underscore character or characters.
- A partial string will match any license that contains the
- given string as the first portion of its license.
- For example, the following whitelist string will also match
- both of the packages previously mentioned as well as any other
- packages that have licenses starting with "commercial" or
- "license".
- <literallayout class='monospaced'>
- LICENSE_FLAGS_WHITELIST = "commercial license"
- </literallayout>
- </para>
-
- <section id="license-flag-matching">
- <title>License Flag Matching</title>
-
- <para>
- License flag matching allows you to control what recipes
- the OpenEmbedded build system includes in the build.
- Fundamentally, the build system attempts to match
- <filename>LICENSE_FLAGS</filename> strings found in recipes
- against <filename>LICENSE_FLAGS_WHITELIST</filename>
- strings found in the whitelist.
- A match causes the build system to include a recipe in the
- build, while failure to find a match causes the build
- system to exclude a recipe.
- </para>
-
- <para>
- In general, license flag matching is simple.
- However, understanding some concepts will help you
- correctly and effectively use matching.
- </para>
-
- <para>
- Before a flag
- defined by a particular recipe is tested against the
- contents of the whitelist, the expanded string
- <filename>_${PN}</filename> is appended to the flag.
- This expansion makes each
- <filename>LICENSE_FLAGS</filename> value recipe-specific.
- After expansion, the string is then matched against the
- whitelist.
- Thus, specifying
- <filename>LICENSE_FLAGS = "commercial"</filename>
- in recipe "foo", for example, results in the string
- <filename>"commercial_foo"</filename>.
- And, to create a match, that string must appear in the
- whitelist.
- </para>
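-
- <para>
- Expressed as configuration, that example corresponds to the
- following pair of settings (the recipe name "foo" is
- hypothetical):
- <literallayout class='monospaced'>
- # In the hypothetical recipe foo_1.0.bb
- LICENSE_FLAGS = "commercial"
-
- # In local.conf, to allow the expanded flag "commercial_foo"
- LICENSE_FLAGS_WHITELIST = "commercial_foo"
- </literallayout>
- </para>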
-
- <para>
- Judicious use of the <filename>LICENSE_FLAGS</filename>
- strings and the contents of the
- <filename>LICENSE_FLAGS_WHITELIST</filename> variable
- allows you a lot of flexibility for including or excluding
- recipes based on licensing.
- For example, you can broaden the matching capabilities by
- using license flags string subsets in the whitelist.
- <note>
- When using a string subset, be sure to use the part of
- the expanded string that precedes the appended
- underscore character (e.g.
- <filename>usethispart_1.3</filename>,
- <filename>usethispart_1.4</filename>, and so forth).
- </note>
- For example, simply specifying the string "commercial" in
- the whitelist matches any expanded
- <filename>LICENSE_FLAGS</filename> definition that starts
- with the string "commercial" such as "commercial_foo" and
- "commercial_bar", which are the strings the build system
- automatically generates for hypothetical recipes named
- "foo" and "bar" assuming those recipes simply specify the
- following:
- <literallayout class='monospaced'>
- LICENSE_FLAGS = "commercial"
- </literallayout>
- Thus, you can choose to exhaustively
- enumerate each license flag in the whitelist and
- allow only specific recipes into the image, or
- you can use a string subset that causes a broader range of
- matches to allow a range of recipes into the image.
- </para>
-
- <para>
- This scheme works even if the
- <filename>LICENSE_FLAGS</filename> string already
- has <filename>_${PN}</filename> appended.
- For example, the build system turns the license flag
- "commercial_1.2_foo" into "commercial_1.2_foo_foo" and
- would match both the general "commercial" and the specific
- "commercial_1.2_foo" strings found in the whitelist, as
- expected.
- </para>
-
- <para>
- Here are some other scenarios:
- <itemizedlist>
- <listitem><para>
- You can specify a versioned string in the recipe
- such as "commercial_foo_1.2" in a "foo" recipe.
- The build system expands this string to
- "commercial_foo_1.2_foo".
- Combine this license flag with a whitelist that has
- the string "commercial" and you match the flag
- along with any other flag that starts with the
- string "commercial".
- </para></listitem>
- <listitem><para>
- Under the same circumstances, you can use
- "commercial_foo" in the whitelist and the build
- system not only matches "commercial_foo_1.2" but
- also matches any license flag with the string
- "commercial_foo", regardless of the version.
- </para></listitem>
- <listitem><para>
- You can be very specific and use both the
- package and version parts in the whitelist (e.g.
- "commercial_foo_1.2") to specifically match a
- versioned recipe.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id="other-variables-related-to-commercial-licenses">
- <title>Other Variables Related to Commercial Licenses</title>
-
- <para>
- Other helpful variables related to commercial
- license handling exist and are defined in the
- <filename>poky/meta/conf/distro/include/default-distrovars.inc</filename> file:
- <literallayout class='monospaced'>
- COMMERCIAL_AUDIO_PLUGINS ?= ""
- COMMERCIAL_VIDEO_PLUGINS ?= ""
- </literallayout>
- If you want to enable these components, you can do so by
- making sure you have statements similar to the following
- in your <filename>local.conf</filename> configuration file:
- <literallayout class='monospaced'>
- COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \
- gst-plugins-ugly-mpegaudioparse"
- COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \
- gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse"
- LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly commercial_gst-plugins-bad commercial_qmmp"
- </literallayout>
- Of course, you could also create a matching whitelist
- for those components using the more general "commercial"
- in the whitelist, but that would also enable all the
- other packages with <filename>LICENSE_FLAGS</filename>
- containing "commercial", which you may or may not want:
- <literallayout class='monospaced'>
- LICENSE_FLAGS_WHITELIST = "commercial"
- </literallayout>
- </para>
-
- <para>
- Specifying audio and video plugins as part of the
- <filename>COMMERCIAL_AUDIO_PLUGINS</filename> and
- <filename>COMMERCIAL_VIDEO_PLUGINS</filename> statements
- (along with the enabling
- <filename>LICENSE_FLAGS_WHITELIST</filename>) includes the
- plugins or components into built images, thus adding
- support for media formats or components.
- </para>
- </section>
- </section>
-
- <section id='maintaining-open-source-license-compliance-during-your-products-lifecycle'>
- <title>Maintaining Open Source License Compliance During Your Product's Lifecycle</title>
-
- <para>
- One of the concerns for a development organization using open source
- software is how to maintain compliance with various open source
- licensing during the lifecycle of the product.
- While this section does not provide legal advice or
- comprehensively cover all scenarios, it does
- present methods that you can use to
- assist you in meeting the compliance requirements during a software
- release.
- </para>
-
- <para>
- With hundreds of different open source licenses that the Yocto
- Project tracks, it is difficult to know the requirements of each
- and every license.
- However, you can begin to cover the requirements of the
- major FLOSS licenses by addressing three main areas of
- concern:
- <itemizedlist>
- <listitem><para>Source code must be provided.</para></listitem>
- <listitem><para>License text for the software must be
- provided.</para></listitem>
- <listitem><para>Compilation scripts and modifications to the
- source code must be provided.
- </para></listitem>
- </itemizedlist>
- There are other requirements beyond the scope of these
- three and the methods described in this section
- (e.g. the mechanism through which source code is distributed).
- </para>
-
- <para>
- As different organizations have different methods of complying with
- open source licensing, this section is not meant to imply that
- there is only one single way to meet your compliance obligations,
- but rather to describe one method of achieving compliance.
- The remainder of this section describes methods supported to meet the
- previously mentioned three requirements.
- Once you take steps to meet these requirements,
- and prior to releasing images, sources, and the build system,
- you should audit all artifacts to ensure completeness.
- <note>
- The Yocto Project generates a license manifest during
- image creation that is located
- in <filename>${DEPLOY_DIR}/licenses/<replaceable>image_name-datestamp</replaceable></filename>
- to assist with any audits.
- </note>
- </para>
-
- <section id='providing-the-source-code'>
- <title>Providing the Source Code</title>
-
- <para>
- Compliance activities should begin before you generate the
- final image.
- The first thing you should look at is the requirement that
- tops the list for most compliance groups - providing
- the source.
- The Yocto Project has a few ways of meeting this
- requirement.
- </para>
-
- <para>
- One of the easiest ways to meet this requirement is
- to provide the entire
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'><filename>DL_DIR</filename></ulink>
- used by the build.
- This method, however, has a few issues.
- The most obvious is the size of the directory since it includes
- all sources used in the build and not just the source used in
- the released image.
- It will include toolchain source, and other artifacts, which
- you would not generally release.
- However, the more serious issue for most companies is accidental
- release of proprietary software.
- The Yocto Project provides an
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-archiver'><filename>archiver</filename></ulink>
- class to help avoid some of these concerns.
- </para>
-
- <para>
- Before you employ <filename>DL_DIR</filename> or the
- <filename>archiver</filename> class, you need to decide how
- you choose to provide source.
- The source <filename>archiver</filename> class can generate
- tarballs and SRPMs and can create them with various levels of
- compliance in mind.
- </para>
-
- <para>
- One way of doing this (but certainly not the only way) is to
- release just the source as a tarball.
- You can do this by adding the following to the
- <filename>local.conf</filename> file found in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
- <literallayout class='monospaced'>
- INHERIT += "archiver"
- ARCHIVER_MODE[src] = "original"
- </literallayout>
- During the creation of your image, the source from all
- recipes that deploy packages to the image is placed within
- subdirectories of
- <filename>DEPLOY_DIR/sources</filename> based on the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE'><filename>LICENSE</filename></ulink>
- for each recipe.
- Releasing the entire directory enables you to comply with
- requirements concerning providing the unmodified source.
- It is important to note that the size of the directory can
- get large.
- </para>
-
- <para>
- A way to help mitigate the size issue is to only release
- tarballs for licenses that require the release of
- source.
- Let us assume you are only concerned with GPL code as
- identified by running the following script:
- <literallayout class='monospaced'>
- #!/bin/bash
- # Script to archive a subset of packages matching specific license(s)
- # Source and license files are copied into sub folders of package folder
- # Must be run from build folder
- src_release_dir="source-release"
- mkdir -p $src_release_dir
- for a in tmp/deploy/sources/*; do
- for d in $a/*; do
- # Get package name from path
- p=`basename $d`
- p=${p%-*}
- p=${p%-*}
- # Only archive GPL packages (update *GPL* regex for your license check)
- numfiles=`ls tmp/deploy/licenses/$p/*GPL* 2> /dev/null | wc -l`
- if [ $numfiles -gt 1 ]; then
- echo Archiving $p
- mkdir -p $src_release_dir/$p/source
- cp $d/* $src_release_dir/$p/source 2> /dev/null
- mkdir -p $src_release_dir/$p/license
- cp tmp/deploy/licenses/$p/* $src_release_dir/$p/license 2> /dev/null
- fi
- done
- done
- </literallayout>
- At this point, you could create a tarball from the
- <filename>source-release</filename> directory and
- provide that to the end user.
- This method would be a step toward achieving compliance
- with section 3a of GPLv2 and with section 6 of GPLv3.
- </para>
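-
- <para>
- For example, the following command, run from the build
- folder, packs that directory into a single archive you can
- distribute (the archive name is only an example):
- <literallayout class='monospaced'>
- $ tar -cjf source-release.tar.bz2 source-release/
- </literallayout>
- </para>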
- </section>
-
- <section id='providing-license-text'>
- <title>Providing License Text</title>
-
- <para>
- One requirement that is often overlooked is inclusion
- of license text.
- This requirement also needs to be dealt with prior to
- generating the final image.
- Some licenses require the license text to accompany
- the binary.
- You can achieve this by adding the following to your
- <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- COPY_LIC_MANIFEST = "1"
- COPY_LIC_DIRS = "1"
- LICENSE_CREATE_PACKAGE = "1"
- </literallayout>
- Adding these statements to the configuration file ensures
- that the licenses collected during package generation
- are included on your image.
- <note>
- <para>Setting all three variables to "1" results in the
- image having two copies of the same license file.
- One copy resides in
- <filename>/usr/share/common-licenses</filename> and
- the other resides in
- <filename>/usr/share/license</filename>.</para>
-
- <para>The reason for this behavior is because
- <ulink url='&YOCTO_DOCS_REF_URL;#var-COPY_LIC_DIRS'><filename>COPY_LIC_DIRS</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#var-COPY_LIC_MANIFEST'><filename>COPY_LIC_MANIFEST</filename></ulink>
- add a copy of the license when the image is built but do
- not offer a path for adding licenses for newly installed
- packages to an image.
- <ulink url='&YOCTO_DOCS_REF_URL;#var-LICENSE_CREATE_PACKAGE'><filename>LICENSE_CREATE_PACKAGE</filename></ulink>
- adds a separate package and an upgrade path for adding
- licenses to an image.</para>
- </note>
- </para>
-
- <para>
- As the source <filename>archiver</filename> class has already
- archived the original
- unmodified source that contains the license files,
- you would have already met the requirements for inclusion
- of the license information with source as defined by the GPL
- and other open source licenses.
- </para>
- </section>
-
- <section id='providing-compilation-scripts-and-source-code-modifications'>
- <title>Providing Compilation Scripts and Source Code Modifications</title>
-
- <para>
- At this point, we have addressed all we need to
- prior to generating the image.
- The next two requirements are addressed during the final
- packaging of the release.
- </para>
-
- <para>
- By releasing the version of the OpenEmbedded build system
- and the layers used during the build, you will be providing both
- compilation scripts and the source code modifications in one
- step.
- </para>
-
- <para>
- If the deployment team has a
- <ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP layer</ulink>
- and a distro layer, and those layers are used to patch,
- compile, package, or modify (in any way) any open source
- software included in your released images, you
- might be required to release those layers under section 3 of
- GPLv2 or section 1 of GPLv3.
- One way of doing that is with a clean
- checkout of the version of the Yocto Project and layers used
- during your build.
- Here is an example:
- <literallayout class='monospaced'>
- # We built using the &DISTRO_NAME_NO_CAP; branch of the poky repo
- $ git clone -b &DISTRO_NAME_NO_CAP; git://git.yoctoproject.org/poky
- $ cd poky
- # We built using the release_branch for our layers
- $ git clone -b release_branch git://git.mycompany.com/meta-my-bsp-layer
- $ git clone -b release_branch git://git.mycompany.com/meta-my-software-layer
- # clean up the .git repos
- $ find . -name ".git" -type d -exec rm -rf {} \;
- </literallayout>
- One thing a development organization might want to consider
- for end-user convenience is to modify
- <filename>meta-poky/conf/bblayers.conf.sample</filename> to
- ensure that when the end user utilizes the released build
- system to build an image, the development organization's
- layers are included in the <filename>bblayers.conf</filename>
- file automatically:
- <literallayout class='monospaced'>
- # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
- # changes incompatibly
- POKY_BBLAYERS_CONF_VERSION = "2"
-
- BBPATH = "${TOPDIR}"
- BBFILES ?= ""
-
- BBLAYERS ?= " \
- ##OEROOT##/meta \
- ##OEROOT##/meta-poky \
- ##OEROOT##/meta-yocto-bsp \
- ##OEROOT##/meta-mylayer \
- "
- </literallayout>
- Creating and providing an archive of the
- <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
- layers (recipes, configuration files, and so forth)
- enables you to meet your
- requirements to include the scripts to control compilation
- as well as any modifications to the original source.
- </para>
- </section>
- </section>
-
- <section id='copying-licenses-that-do-not-exist'>
- <title>Copying Licenses that Do Not Exist</title>
-
- <para>
- Some packages, such as the linux-firmware package, have many
- licenses that are not in any way common.
- You can avoid having to add a lot of these license files to
- the common licenses directory, since they apply only to a
- specific package, by using the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-NO_GENERIC_LICENSE'><filename>NO_GENERIC_LICENSE</filename></ulink>
- variable.
- Using this variable also avoids QA errors when you use a
- non-common, non-CLOSED license in a recipe.
- </para>
-
- <para>
- The following is an example that uses the
- <filename>LICENSE.Abilis.txt</filename>
- file as the license from the fetched source:
- <literallayout class='monospaced'>
- NO_GENERIC_LICENSE[Firmware-Abilis] = "LICENSE.Abilis.txt"
- </literallayout>
- </para>
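-
- <para>
- For reference, a hypothetical recipe fragment that ties such
- a package-specific license into the build might look like the
- following (this is only a sketch; the checksum is a
- placeholder, and the actual <filename>linux-firmware</filename>
- recipe defines many such entries):
- <literallayout class='monospaced'>
- LICENSE = "Firmware-Abilis"
- NO_GENERIC_LICENSE[Firmware-Abilis] = "LICENSE.Abilis.txt"
- LIC_FILES_CHKSUM = "file://LICENSE.Abilis.txt;md5=<replaceable>checksum</replaceable>"
- </literallayout>
- </para>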
- </section>
- </section>
-
- <section id='using-the-error-reporting-tool'>
- <title>Using the Error Reporting Tool</title>
-
- <para>
- The error reporting tool allows you to
- submit errors encountered during builds to a central database.
- Outside of the build environment, you can use a web interface to
- browse errors, view statistics, and query for errors.
- The tool works using a client-server system where the client
- portion is integrated with the installed Yocto Project
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- (e.g. <filename>poky</filename>).
- The server receives the information collected and saves it in a
- database.
- </para>
-
- <para>
- A live instance of the error reporting server exists at
- <ulink url='http://errors.yoctoproject.org'></ulink>.
- This server exists so that when you want to get help with
- build failures, you can submit all of the information on the
- failure easily and then point to the URL in your bug report
- or send an email to the mailing list.
- <note>
- If you send error reports to this server, the reports become
- publicly visible.
- </note>
- </para>
-
- <section id='enabling-and-using-the-tool'>
- <title>Enabling and Using the Tool</title>
-
- <para>
- By default, the error reporting tool is disabled.
- You can enable it by inheriting the
- <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-report-error'><filename>report-error</filename></ulink>
- class by adding the following statement to the end of
- your <filename>local.conf</filename> file in your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- <literallayout class='monospaced'>
- INHERIT += "report-error"
- </literallayout>
- </para>
-
- <para>
- By default, the error reporting feature stores information in
- <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-LOG_DIR'><filename>LOG_DIR</filename></ulink><filename>}/error-report</filename>.
- However, you can specify a directory to use by adding the following
- to your <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- ERR_REPORT_DIR = "path"
- </literallayout>
- Enabling error reporting causes the build process to collect
- the errors and store them in a file as previously described.
- When the build system encounters an error, it includes a
- command as part of the console output.
- You can run the command to send the error file to the server.
- For example, the following command sends the errors to an
- upstream server:
- <literallayout class='monospaced'>
- $ send-error-report /home/brandusa/project/poky/build/tmp/log/error-report/error_report_201403141617.txt
- </literallayout>
- In the previous example, the errors are sent to a public
- database available at
- <ulink url='http://errors.yoctoproject.org'></ulink>, which is
- used by the entire community.
- If you specify a particular server, you can send the errors
- to a different database.
- Use the following command for more information on available
- options:
- <literallayout class='monospaced'>
- $ send-error-report --help
- </literallayout>
- </para>
-
- <para>
- When sending the error file, you are prompted to review the
- data being sent as well as to provide a name and optional
- email address.
- Once you satisfy these prompts, the command returns a link
- from the server that corresponds to your entry in the database.
- For example, here is a typical link:
- <literallayout class='monospaced'>
- http://errors.yoctoproject.org/Errors/Details/9522/
- </literallayout>
- Following the link takes you to a web interface where you can
- browse, query the errors, and view statistics.
- </para>
- </section>
-
- <section id='disabling-the-tool'>
- <title>Disabling the Tool</title>
-
- <para>
- To disable the error reporting feature, simply remove or comment
- out the following statement from the end of your
- <filename>local.conf</filename> file in your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- <literallayout class='monospaced'>
- INHERIT += "report-error"
- </literallayout>
- </para>
- </section>
-
- <section id='setting-up-your-own-error-reporting-server'>
- <title>Setting Up Your Own Error Reporting Server</title>
-
- <para>
- If you want to set up your own error reporting server, you
- can obtain the code from the Git repository at
- <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/error-report-web/'></ulink>.
- Instructions on how to set it up are in the README document.
- </para>
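-
- <para>
- For example, assuming the standard Yocto Project Git URL
- scheme, you could clone the server code as follows:
- <literallayout class='monospaced'>
- $ git clone git://git.yoctoproject.org/error-report-web
- </literallayout>
- </para>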
- </section>
- </section>
-
- <section id="dev-using-wayland-and-weston">
- <title>Using Wayland and Weston</title>
-
- <para>
- <ulink url='http://en.wikipedia.org/wiki/Wayland_(display_server_protocol)'>Wayland</ulink>
- is a computer display server protocol that
- provides a method for compositing window managers to communicate
- directly with applications and video hardware and expects them to
- communicate with input hardware using other libraries.
- Using Wayland with supporting targets can result in better control
- over graphics frame rendering than an application might otherwise
- achieve.
- </para>
-
- <para>
- The Yocto Project provides the Wayland protocol libraries and the
- reference
- <ulink url='http://en.wikipedia.org/wiki/Wayland_(display_server_protocol)#Weston'>Weston</ulink>
- compositor as part of its release.
- You can find the integrated packages in the
- <filename>meta</filename> layer of the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
- Specifically, you can find the recipes that build both Wayland
- and Weston at <filename>meta/recipes-graphics/wayland</filename>.
- </para>
-
- <para>
- You can build both the Wayland and Weston packages for use only
- with targets that accept the
- <ulink url='https://en.wikipedia.org/wiki/Mesa_(computer_graphics)'>Mesa 3D and Direct Rendering Infrastructure</ulink>,
- which is also known as Mesa DRI.
- This implies that you cannot build and use the packages if your
- target uses, for example, the
- <trademark class='registered'>Intel</trademark> Embedded Media
- and Graphics Driver
- (<trademark class='registered'>Intel</trademark> EMGD) that
- overrides Mesa DRI.
- <note>
- Due to lack of EGL support, Weston 1.0.3 will not run
- directly on the emulated QEMU hardware.
- However, this version of Weston will run under X emulation
- without issues.
- </note>
- </para>
-
- <para>
- This section describes what you need to do to implement Wayland and
- use the Weston compositor when building an image for a supporting
- target.
- </para>
-
- <section id="enabling-wayland-in-an-image">
- <title>Enabling Wayland in an Image</title>
-
- <para>
- To enable Wayland, you need to enable it to be built and enable
- it to be included (installed) in the image.
- </para>
-
- <section id="enable-building">
- <title>Building</title>
-
- <para>
- To cause Mesa to build the <filename>wayland-egl</filename>
- platform and Weston to build Wayland with Kernel Mode
- Setting
- (<ulink url='https://wiki.archlinux.org/index.php/Kernel_Mode_Setting'>KMS</ulink>)
- support, include the "wayland" flag in the
- <ulink url="&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES"><filename>DISTRO_FEATURES</filename></ulink>
- statement in your <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- DISTRO_FEATURES_append = " wayland"
- </literallayout>
- <note>
- If X11 has been enabled elsewhere, Weston will build
- Wayland with X11 support.
- </note>
- </para>
- </section>
-
- <section id="enable-installation-in-an-image">
- <title>Installing</title>
-
- <para>
- To install the Wayland feature into an image, you must
- include the following
- <ulink url='&YOCTO_DOCS_REF_URL;#var-CORE_IMAGE_EXTRA_INSTALL'><filename>CORE_IMAGE_EXTRA_INSTALL</filename></ulink>
- statement in your <filename>local.conf</filename> file:
- <literallayout class='monospaced'>
- CORE_IMAGE_EXTRA_INSTALL += "wayland weston"
- </literallayout>
- </para>
- </section>
- </section>
-
- <section id="running-weston">
- <title>Running Weston</title>
-
- <para>
- To run Weston inside X11, enabling it as described earlier and
- building a Sato image is sufficient.
- If you are running your image under Sato, a Weston Launcher
- appears in the "Utility" category.
- </para>
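-
- <para>
- For example, with the <filename>local.conf</filename>
- additions from the previous section in place, the following
- command builds such an image:
- <literallayout class='monospaced'>
- $ bitbake core-image-sato
- </literallayout>
- </para>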
-
- <para>
- Alternatively, you can run Weston through the command-line
- interpreter (CLI), which is better suited for development work.
- To run Weston under the CLI, you need to do the following after
- your image is built:
- <orderedlist>
- <listitem><para>
- Run these commands to export
- <filename>XDG_RUNTIME_DIR</filename>:
- <literallayout class='monospaced'>
- mkdir -p /tmp/$USER-weston
- chmod 0700 /tmp/$USER-weston
- export XDG_RUNTIME_DIR=/tmp/$USER-weston
- </literallayout>
- </para></listitem>
- <listitem><para>
- Launch Weston in the shell:
- <literallayout class='monospaced'>
- weston
- </literallayout></para></listitem>
- </orderedlist>
- </para>
- </section>
- </section>
-</chapter>
-
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/documentation/dev-manual/dev-manual-customization.xsl b/documentation/dev-manual/dev-manual-customization.xsl
deleted file mode 100644
index 523ea3c5ed..0000000000
--- a/documentation/dev-manual/dev-manual-customization.xsl
+++ /dev/null
@@ -1,27 +0,0 @@
-<?xml version='1.0'?>
-<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
-
- <xsl:import href="http://downloads.yoctoproject.org/mirror/docbook-mirror/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
-
-<!--
-
- <xsl:import href="../template/1.76.1/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
-
- <xsl:import href="http://docbook.sourceforge.net/release/xsl/1.76.1/xhtml/docbook.xsl" />
-
--->
-
- <xsl:include href="../template/permalinks.xsl"/>
- <xsl:include href="../template/section.title.xsl"/>
- <xsl:include href="../template/component.title.xsl"/>
- <xsl:include href="../template/division.title.xsl"/>
- <xsl:include href="../template/formal.object.heading.xsl"/>
-
- <xsl:param name="html.stylesheet" select="'dev-style.css'" />
- <xsl:param name="chapter.autolabel" select="1" />
- <xsl:param name="appendix.autolabel" select="A" />
- <xsl:param name="section.autolabel" select="1" />
- <xsl:param name="section.label.includes.component.label" select="1" />
- <xsl:param name="generate.id.attributes" select="1" />
-
-</xsl:stylesheet>
diff --git a/documentation/dev-manual/dev-manual-intro.xml b/documentation/dev-manual/dev-manual-intro.xml
deleted file mode 100644
index 3a34094b8c..0000000000
--- a/documentation/dev-manual/dev-manual-intro.xml
+++ /dev/null
@@ -1,103 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='dev-manual-intro'>
-
-<title>The Yocto Project Development Tasks Manual</title>
- <section id='dev-welcome'>
- <title>Welcome</title>
-
- <para>
- Welcome to the Yocto Project Development Tasks Manual!
- This manual provides relevant procedures necessary for developing
- in the Yocto Project environment (i.e. developing embedded Linux
- images and user-space applications that run on targeted devices).
- The manual groups related procedures into higher-level sections.
- Procedures can consist of high-level steps or low-level steps
- depending on the topic.
- </para>
-
- <para>
- This manual provides the following:
- <itemizedlist>
- <listitem><para>
- Procedures that help you get going with the Yocto Project.
- For example, procedures that show you how to set up
- a build host and work with the Yocto Project
- source repositories.
- </para></listitem>
- <listitem><para>
- Procedures that show you how to submit changes to the
- Yocto Project.
- Changes can be improvements, new features, or bug
- fixes.
- </para></listitem>
- <listitem><para>
- Procedures related to "everyday" tasks you perform while
- developing images and applications using the Yocto
- Project.
- For example, procedures to create a layer, customize an
- image, write a new recipe, and so forth.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- This manual does not provide the following:
- <itemizedlist>
- <listitem><para>
- Redundant Step-by-step Instructions:
- For example, the
- <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
- manual contains detailed instructions on how to install an
- SDK, which is used to develop applications for target
- hardware.
- </para></listitem>
- <listitem><para>
- Reference or Conceptual Material:
- This type of material resides in an appropriate reference
- manual.
- For example, system variables are documented in the
- <ulink url='&YOCTO_DOCS_REF_URL;'>Yocto Project Reference Manual</ulink>.
- </para></listitem>
- <listitem><para>
- Detailed Public Information Not Specific to the
- Yocto Project:
- For example, exhaustive information on how to use the
- Source Control Manager Git is better covered with Internet
- searches and official Git Documentation than through the
- Yocto Project documentation.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='other-information'>
- <title>Other Information</title>
-
- <para>
- Because this manual presents information for many different
- topics, supplemental information is recommended for full
- comprehension.
- For introductory information on the Yocto Project, see the
- <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>.
- If you want to build an image with no knowledge of Yocto Project
- as a way of quickly testing it out, see the
- <ulink url='&YOCTO_DOCS_BRIEF_URL;'>Yocto Project Quick Build</ulink>
- document.
- </para>
-
- <para>
- For a comprehensive list of links and other documentation, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#resources-links-and-related-documentation'>Links and Related Documentation</ulink>"
- section in the Yocto Project Reference Manual.
- </para>
-
- <para>
- </para>
- </section>
-</chapter>
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/documentation/dev-manual/dev-manual-qemu.xml b/documentation/dev-manual/dev-manual-qemu.xml
deleted file mode 100644
index 5ccc0dfe83..0000000000
--- a/documentation/dev-manual/dev-manual-qemu.xml
+++ /dev/null
@@ -1,690 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='dev-manual-qemu'>
-
-<title>Using the Quick EMUlator (QEMU)</title>
-
- <para>
- The Yocto Project uses an implementation of the Quick EMUlator (QEMU)
- Open Source project as part of the Yocto Project development "tool
- set".
- This chapter provides both procedures that show you how to use the
- Quick EMUlator (QEMU) and other QEMU information helpful for
- development purposes.
- </para>
-
- <section id='qemu-dev-overview'>
- <title>Overview</title>
-
- <para>
- Within the context of the Yocto Project, QEMU is an
- emulator and virtualization machine that allows you to run a
- complete image you have built using the Yocto Project as just
- another task on your build system.
- QEMU is useful for running and testing images and applications on
- supported Yocto Project architectures without having actual
- hardware.
- Among other things, the Yocto Project uses QEMU to run automated
- Quality Assurance (QA) tests on final images shipped with each
- release.
- <note>
- This implementation is not the same as QEMU in general.
- </note>
- This section provides a brief reference for the Yocto Project
- implementation of QEMU.
- </para>
-
- <para>
- For official information and documentation on QEMU in general, see
- the following references:
- <itemizedlist>
- <listitem><para>
- <emphasis><ulink url='http://wiki.qemu.org/Main_Page'>QEMU Website</ulink>:</emphasis>
- The official website for the QEMU Open Source project.
- </para></listitem>
- <listitem><para>
- <emphasis><ulink url='http://wiki.qemu.org/Manual'>Documentation</ulink>:</emphasis>
- The QEMU user manual.
- </para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='qemu-running-qemu'>
- <title>Running QEMU</title>
-
- <para>
- To use QEMU, you need to have QEMU installed and initialized as
- well as have the proper artifacts (i.e. image files and root
- filesystems) available.
- Follow these general steps to run QEMU:
- <orderedlist>
- <listitem><para>
- <emphasis>Install QEMU:</emphasis>
- QEMU is made available with the Yocto Project a number of
- ways.
- One method is to install a Software Development Kit (SDK).
- See
- "<ulink url='&YOCTO_DOCS_SDK_URL;#the-qemu-emulator'>The QEMU Emulator</ulink>"
- section in the Yocto Project Application Development and
- the Extensible Software Development Kit (eSDK) manual
- for information on how to install QEMU.
- </para></listitem>
- <listitem><para>
- <emphasis>Setting Up the Environment:</emphasis>
- How you set up the QEMU environment depends on how you
- installed QEMU:
- <itemizedlist>
- <listitem><para>
- If you cloned the <filename>poky</filename>
- repository or you downloaded and unpacked a
- Yocto Project release tarball, you can source
- the build environment script (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>):
- <literallayout class='monospaced'>
- $ cd ~/poky
- $ source oe-init-build-env
- </literallayout>
- </para></listitem>
- <listitem><para>
- If you installed a cross-toolchain, you can
- run the script that initializes the toolchain.
- For example, the following commands run the
- initialization script from the default
- <filename>poky_sdk</filename> directory:
- <literallayout class='monospaced'>
- . ~/poky_sdk/environment-setup-core2-64-poky-linux
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Ensure the Artifacts are in Place:</emphasis>
- You need to be sure you have a pre-built kernel that
- will boot in QEMU.
- You also need the target root filesystem for your target
- machine’s architecture:
- <itemizedlist>
- <listitem><para>
- If you have previously built an image for QEMU
- (e.g. <filename>qemux86</filename>,
- <filename>qemuarm</filename>, and so forth),
- then the artifacts are in place in your
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
- </para></listitem>
- <listitem><para>
- If you have not built an image, you can go to the
- <ulink url='&YOCTO_MACHINES_DL_URL;'>machines/qemu</ulink>
- area and download a pre-built image that matches
- your architecture and can be run on QEMU.
- </para></listitem>
- </itemizedlist></para>
-
- <para>See the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extracting-the-root-filesystem'>Extracting the Root Filesystem</ulink>"
- section in the Yocto Project Application Development and
- the Extensible Software Development Kit (eSDK) manual
- for information on how to extract a root filesystem.
- </para></listitem>
- <listitem><para>
- <emphasis>Run QEMU:</emphasis>
- The basic <filename>runqemu</filename> command syntax is as
- follows:
- <literallayout class='monospaced'>
- $ runqemu [<replaceable>option</replaceable> ] [...]
- </literallayout>
- Based on what you provide on the command line,
- <filename>runqemu</filename> does a good job of figuring
- out what you are trying to do.
- For example, by default, QEMU looks for the most recently
- built image according to the timestamp when it needs to
- look for an image.
- Minimally, through the use of options, you must provide
- either a machine name, a virtual machine image
- (<filename>*.wic.vmdk</filename>), or a kernel image
- (<filename>*.bin</filename>).</para>
-
- <para>Here are some additional examples to help illustrate
- further QEMU:
- <itemizedlist>
- <listitem><para>
- This example starts QEMU with
- <replaceable>MACHINE</replaceable> set to "qemux86-64".
- Assuming a standard
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
- <filename>runqemu</filename> automatically finds the
- <filename>bzImage-qemux86-64.bin</filename> image file and
- the
- <filename>core-image-minimal-qemux86-64-20200218002850.rootfs.ext4</filename>
- (assuming the current build created a
- <filename>core-image-minimal</filename> image).
- <note>
- When more than one image with the same name exists, QEMU finds
- and uses the most recently built image according to the
- timestamp.
- </note>
- <literallayout class='monospaced'>
- $ runqemu qemux86-64
- </literallayout>
- </para></listitem>
- <listitem><para>
- This example produces the exact same results as the
- previous example.
- This command, however, specifically provides the image
- and root filesystem type.
- <literallayout class='monospaced'>
- $ runqemu qemux86-64 core-image-minimal ext4
- </literallayout>
- </para></listitem>
- <listitem><para>
- This example specifies to boot an initial RAM disk image
- and to enable audio in QEMU.
- For this case, <filename>runqemu</filename> sets the
- internal variable <filename>FSTYPE</filename> to
- "cpio.gz".
- Also, for audio to be enabled, an appropriate driver must
- be installed (see the previous description for the
- <filename>audio</filename> option for more information).
- <literallayout class='monospaced'>
- $ runqemu qemux86-64 ramfs audio
- </literallayout>
- </para></listitem>
- <listitem><para>
- This example does not provide enough information for
- QEMU to launch.
- While the command does provide a root filesystem type, it
- must also minimally provide a
- <replaceable>MACHINE</replaceable>,
- <replaceable>KERNEL</replaceable>, or
- <replaceable>VM</replaceable> option.
- <literallayout class='monospaced'>
- $ runqemu ext4
- </literallayout>
- </para></listitem>
- <listitem><para>
- This example specifies to boot a virtual machine
- image (<filename>.wic.vmdk</filename> file).
- From the <filename>.wic.vmdk</filename>,
- <filename>runqemu</filename> determines the QEMU
- architecture (<replaceable>MACHINE</replaceable>) to be
- "qemux86-64" and the root filesystem type to be "vmdk".
- <literallayout class='monospaced'>
- $ runqemu /home/scott-lenovo/vm/core-image-minimal-qemux86-64.wic.vmdk
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='switching-between-consoles'>
- <title>Switching Between Consoles</title>
-
- <para>
- When booting or running QEMU, you can switch between
- supported consoles by using
- Ctrl+Alt+<replaceable>number</replaceable>.
- For example, Ctrl+Alt+3 switches you to the serial console
- as long as that console is enabled.
- Being able to switch consoles is helpful, for example, if
- the main QEMU console breaks for some reason.
- <note>
- Usually, "2" gets you to the main console and "3"
- gets you to the serial console.
- </note>
- </para>
- </section>
-
- <section id='removing-the-splash-screen'>
- <title>Removing the Splash Screen</title>
-
- <para>
- You can remove the splash screen when QEMU is booting by
- using Alt+left.
- Removing the splash screen allows you to see what is
- happening in the background.
- </para>
- </section>
-
- <section id='disabling-the-cursor-grab'>
- <title>Disabling the Cursor Grab</title>
-
- <para>
- The default QEMU integration captures the cursor within the
- main window.
- It does this since standard mouse devices only provide
- relative input and not absolute coordinates.
- You then have to break out of the grab using the "Ctrl+Alt"
- key combination.
- However, the Yocto Project's integration of QEMU enables
- the wacom USB touch pad driver by default to allow input
- of absolute coordinates.
- This default means that the mouse can enter and leave the
- main window without the grab taking effect leading to a
- better user experience.
- </para>
- </section>
-
- <section id='qemu-running-under-a-network-file-system-nfs-server'>
- <title>Running Under a Network File System (NFS) Server</title>
-
- <para>
- One method for running QEMU is to run it on an NFS server.
- This is useful when you need to access the same file system
- from both the build and the emulated system at the same time.
- It is also worth noting that the system does not need root
- privileges to run.
- It uses a user space NFS server to avoid that.
- Follow these steps to set up for running QEMU using an NFS
- server.
- <orderedlist>
- <listitem><para>
- <emphasis>Extract a Root Filesystem:</emphasis>
- Once you are able to run QEMU in your environment, you can
- use the <filename>runqemu-extract-sdk</filename> script,
- which is located in the <filename>scripts</filename>
- directory along with the <filename>runqemu</filename>
- script.</para>
-
- <para>The <filename>runqemu-extract-sdk</filename> script takes a
- root filesystem tarball and extracts it into a location
- that you specify.
- Here is an example that takes a file system and
- extracts it to a directory named
- <filename>test-nfs</filename>:
- <literallayout class='monospaced'>
- runqemu-extract-sdk ./tmp/deploy/images/qemux86-64/core-image-sato-qemux86-64.tar.bz2 test-nfs
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Start QEMU:</emphasis>
- Once you have extracted the file system, you can run
- <filename>runqemu</filename> normally with the additional
- location of the file system.
- You can then also make changes to the files within
- <filename>./test-nfs</filename> and see those changes
- appear in the image in real time.
- Here is an example using the <filename>qemux86-64</filename>
- image:
- <literallayout class='monospaced'>
- runqemu qemux86-64 ./test-nfs
- </literallayout>
- </para></listitem>
- </orderedlist>
- <note>
- <para>
- Should you need to start, stop, or restart the NFS share,
- you can use the following commands:
- <itemizedlist>
- <listitem><para>
- The following command starts the NFS share:
- <literallayout class='monospaced'>
- runqemu-export-rootfs start <replaceable>file-system-location</replaceable>
- </literallayout>
- </para></listitem>
- <listitem><para>
- The following command stops the NFS share:
- <literallayout class='monospaced'>
- runqemu-export-rootfs stop <replaceable>file-system-location</replaceable>
- </literallayout>
- </para></listitem>
- <listitem><para>
- The following command restarts the NFS share:
- <literallayout class='monospaced'>
- runqemu-export-rootfs restart <replaceable>file-system-location</replaceable>
- </literallayout>
- </para></listitem>
- </itemizedlist>
- </para>
- </note>
- </para>
- </section>
-
- <section id='qemu-kvm-cpu-compatibility'>
- <title>QEMU CPU Compatibility Under KVM</title>
-
- <para>
- By default, the QEMU build compiles for and targets 64-bit and x86
- <trademark class='registered'>Intel</trademark> <trademark class='trademark'>Core</trademark>2
- Duo processors and 32-bit x86
- <trademark class='registered'>Intel</trademark> <trademark class='registered'>Pentium</trademark>
- II processors.
- QEMU builds for and targets these CPU types because they display
- a broad range of CPU feature compatibility with many commonly
- used CPUs.
- </para>
-
- <para>
- Despite this broad range of compatibility, the CPUs could support
- a feature that your host CPU does not support.
- Although this situation is not a problem when QEMU uses software
- emulation of the feature, it can be a problem when QEMU is
- running with KVM enabled.
- Specifically, software compiled with a certain CPU feature crashes
- when run on a CPU under KVM that does not support that feature.
- To work around this problem, you can override QEMU's runtime CPU
- setting by changing the <filename>QB_CPU_KVM</filename>
- variable in <filename>qemuboot.conf</filename> in the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory's</ulink>
- <filename>tmp/deploy/images/<replaceable>machine</replaceable></filename> directory.
- This setting specifies a <filename>-cpu</filename> option
- passed into QEMU in the <filename>runqemu</filename> script.
- Running <filename>qemu -cpu help</filename> returns a list of
- available supported CPU types.
- </para>
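-
- <para>
- For example, to pass the host CPU model straight through to
- QEMU when KVM is enabled, you could edit the image's
- <filename>qemuboot.conf</filename> file so that it contains
- a line such as the following (a sketch only; the default
- value varies by machine):
- <literallayout class='monospaced'>
- QB_CPU_KVM = "-cpu host"
- </literallayout>
- </para>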
- </section>
-
- <section id='qemu-dev-performance'>
- <title>QEMU Performance</title>
-
- <para>
- Using QEMU to emulate your hardware can result in speed issues
- depending on the target and host architecture mix.
- For example, using the <filename>qemux86</filename> image in the
- emulator on an Intel-based 32-bit (x86) host machine is fast
- because the target and host architectures match.
- On the other hand, using the <filename>qemuarm</filename> image
- on the same Intel-based host can be slower.
- But, you still achieve faithful emulation of ARM-specific issues.
- </para>
-
- <para>
- To speed things up, the QEMU images support using
- <filename>distcc</filename> to call a cross-compiler outside the
- emulated system.
- If you used <filename>runqemu</filename> to start QEMU, and the
- <filename>distccd</filename> application is present on the host
- system, any BitBake cross-compiling toolchain available from the
- build system is automatically used from within QEMU simply by
- calling <filename>distcc</filename>.
- You can accomplish this by defining the cross-compiler variable
- (e.g. <filename>export CC="distcc"</filename>).
- Alternatively, if you are using a suitable SDK image or the
- appropriate stand-alone toolchain is present, the toolchain is
- also automatically used.
- <note>
- Several mechanisms exist that let you connect to the system
- running on the QEMU emulator:
- <itemizedlist>
- <listitem><para>
- QEMU provides a framebuffer interface that makes
- standard consoles available.
- </para></listitem>
- <listitem><para>
- Generally, headless embedded devices have a serial port.
- If so, you can configure the operating system of the
- running image to use that port to run a console.
- The connection uses standard IP networking.
- </para></listitem>
- <listitem><para>
- SSH servers exist in some QEMU images.
- The <filename>core-image-sato</filename> QEMU image
- has a Dropbear secure shell (SSH) server that runs
- with the root password disabled.
- The <filename>core-image-full-cmdline</filename> and
- <filename>core-image-lsb</filename> QEMU images
- have OpenSSH instead of Dropbear.
- Including these SSH servers allows you to use standard
- <filename>ssh</filename> and <filename>scp</filename>
- commands.
- The <filename>core-image-minimal</filename> QEMU image,
- however, contains no SSH server.
- </para></listitem>
- <listitem><para>
- You can use a provided, user-space NFS server to boot
- the QEMU session using a local copy of the root
- filesystem on the host.
- In order to make this connection, you must extract a
- root filesystem tarball by using the
- <filename>runqemu-extract-sdk</filename> command.
- After running the command, you must then point the
- <filename>runqemu</filename>
- script to the extracted directory instead of a root
- filesystem image file.
- See the
- "<link linkend='qemu-running-under-a-network-file-system-nfs-server'>Running Under a Network File System (NFS) Server</link>"
- section for more information.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
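-
- <para>
- For example, inside the emulated system a build could use the
- cross-compiler through <filename>distcc</filename> as follows
- (a minimal sketch, assuming <filename>distccd</filename> is
- running on the host as described above):
- <literallayout class='monospaced'>
- $ export CC="distcc"
- $ make
- </literallayout>
- </para>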
- </section>
-
- <section id='qemu-dev-command-line-syntax'>
- <title>QEMU Command-Line Syntax</title>
-
- <para>
- The basic <filename>runqemu</filename> command syntax is as
- follows:
- <literallayout class='monospaced'>
- $ runqemu [<replaceable>option</replaceable> ] [...]
- </literallayout>
- Based on what you provide on the command line,
- <filename>runqemu</filename> does a good job of figuring out what
- you are trying to do.
- For example, by default, QEMU looks for the most recently built
- image according to the timestamp when it needs to look for an
- image.
- Minimally, through the use of options, you must provide either
- a machine name, a virtual machine image
- (<filename>*.wic.vmdk</filename>), or a kernel image
- (<filename>*.bin</filename>).
- </para>
-
- <para>
- Following is the command-line help output for the
- <filename>runqemu</filename> command:
- <literallayout class='monospaced'>
- $ runqemu --help
-
- Usage: you can run this script with any valid combination
- of the following environment variables (in any order):
- KERNEL - the kernel image file to use
- ROOTFS - the rootfs image file or nfsroot directory to use
- MACHINE - the machine name (optional, autodetected from KERNEL filename if unspecified)
- Simplified QEMU command-line options can be passed with:
- nographic - disable video console
- serial - enable a serial console on /dev/ttyS0
- slirp - enable user networking, no root privileges is required
- kvm - enable KVM when running x86/x86_64 (VT-capable CPU required)
- kvm-vhost - enable KVM with vhost when running x86/x86_64 (VT-capable CPU required)
- publicvnc - enable a VNC server open to all hosts
- audio - enable audio
- [*/]ovmf* - OVMF firmware file or base name for booting with UEFI
- tcpserial=&lt;port&gt; - specify tcp serial port number
- biosdir=&lt;dir&gt; - specify custom bios dir
- biosfilename=&lt;filename&gt; - specify bios filename
- qemuparams=&lt;xyz&gt; - specify custom parameters to QEMU
- bootparams=&lt;xyz&gt; - specify custom kernel parameters during boot
- help, -h, --help: print this text
-
- Examples:
- runqemu
- runqemu qemuarm
- runqemu tmp/deploy/images/qemuarm
- runqemu tmp/deploy/images/qemux86/&lt;qemuboot.conf&gt;
- runqemu qemux86-64 core-image-sato ext4
- runqemu qemux86-64 wic-image-minimal wic
- runqemu path/to/bzImage-qemux86.bin path/to/nfsrootdir/ serial
- runqemu qemux86 iso/hddimg/wic.vmdk/wic.qcow2/wic.vdi/ramfs/cpio.gz...
- runqemu qemux86 qemuparams="-m 256"
- runqemu qemux86 bootparams="psplash=false"
- runqemu path/to/&lt;image&gt;-&lt;machine&gt;.wic
- runqemu path/to/&lt;image&gt;-&lt;machine&gt;.wic.vmdk
- </literallayout>
- </para>
- </section>
-
- <section id='qemu-dev-runqemu-command-line-options'>
- <title><filename>runqemu</filename> Command-Line Options</title>
-
- <para>
- Following is a description of <filename>runqemu</filename>
- options you can provide on the command line:
- <note><title>Tip</title>
- If you do provide some "illegal" option combination or perhaps
- you do not provide enough in the way of options,
- <filename>runqemu</filename> provides appropriate error
- messaging to help you correct the problem.
- </note>
- <itemizedlist>
- <listitem><para>
- <replaceable>QEMUARCH</replaceable>:
- The QEMU machine architecture, which must be "qemuarm",
- "qemuarm64", "qemumips", "qemumips64", "qemuppc",
- "qemux86", or "qemux86-64".
- </para></listitem>
- <listitem><para>
- <filename><replaceable>VM</replaceable></filename>:
- The virtual machine image, which must be a
- <filename>.wic.vmdk</filename> file.
- Use this option when you want to boot a
- <filename>.wic.vmdk</filename> image.
- The image filename you provide must contain one of the
- following strings: "qemux86-64", "qemux86", "qemuarm",
- "qemumips64", "qemumips", "qemuppc", or "qemush4".
- </para></listitem>
- <listitem><para>
- <replaceable>ROOTFS</replaceable>:
- A root filesystem that has one of the following
- filetype extensions: "ext2", "ext3", "ext4", "jffs2",
- "nfs", or "btrfs".
- If the filename you provide for this option uses "nfs", you
- must provide an explicit root filesystem path.
- </para></listitem>
- <listitem><para>
- <replaceable>KERNEL</replaceable>:
- A kernel image, which is a <filename>.bin</filename> file.
- When you provide a <filename>.bin</filename> file,
- <filename>runqemu</filename> detects it and assumes the
- file is a kernel image.
- </para></listitem>
- <listitem><para>
- <replaceable>MACHINE</replaceable>:
- The architecture of the QEMU machine, which must be one
- of the following: "qemux86", "qemux86-64", "qemuarm",
- "qemuarm64", "qemumips", “qemumips64", or "qemuppc".
- The <replaceable>MACHINE</replaceable> and
- <replaceable>QEMUARCH</replaceable> options are basically
- identical.
- If you do not provide a <replaceable>MACHINE</replaceable>
- option, <filename>runqemu</filename> tries to determine
- it based on other options.
- </para></listitem>
- <listitem><para>
- <filename>ramfs</filename>:
- Indicates you are booting an initial RAM disk (initramfs)
- image, which means the <filename>FSTYPE</filename> is
- <filename>cpio.gz</filename>.
- </para></listitem>
- <listitem><para>
- <filename>iso</filename>:
- Indicates you are booting an ISO image, which means the
- <filename>FSTYPE</filename> is
- <filename>.iso</filename>.
- </para></listitem>
- <listitem><para>
- <filename>nographic</filename>:
- Disables the video console, which sets the console to
- "ttys0".
- This option is useful when you have logged into a server
- and you do not want to disable forwarding from the
- X Window System (X11) to your workstation or laptop.
- </para></listitem>
- <listitem><para>
- <filename>serial</filename>:
- Enables a serial console on
- <filename>/dev/ttyS0</filename>.
- </para></listitem>
- <listitem><para>
- <filename>biosdir</filename>:
- Establishes a custom directory for BIOS, VGA BIOS and
- keymaps.
- </para></listitem>
- <listitem><para>
- <filename>biosfilename</filename>:
- Establishes a custom BIOS name.
- </para></listitem>
- <listitem><para>
- <filename>qemuparams=\"<replaceable>xyz</replaceable>\"</filename>:
- Specifies custom QEMU parameters.
- Use this option to pass options other than the simple
- "kvm" and "serial" options.
- </para></listitem>
- <listitem><para><filename>bootparams=\"<replaceable>xyz</replaceable>\"</filename>:
- Specifies custom boot parameters for the kernel.
- </para></listitem>
- <listitem><para>
- <filename>audio</filename>:
- Enables audio in QEMU.
- The <replaceable>MACHINE</replaceable> option must be
- either "qemux86" or "qemux86-64" in order for audio to be
- enabled.
- Additionally, the <filename>snd_intel8x0</filename>
- or <filename>snd_ens1370</filename> driver must be
- installed in the Linux guest.
- </para></listitem>
- <listitem><para>
- <filename>slirp</filename>:
- Enables "slirp" networking, which is a different way
- of networking that does not need root access
- but also is not as easy to use or comprehensive
- as the default.
- </para></listitem>
- <listitem><para id='kvm-cond'>
- <filename>kvm</filename>:
- Enables KVM when running "qemux86" or "qemux86-64"
- QEMU architectures.
- For KVM to work, all the following conditions must be met:
- <itemizedlist>
- <listitem><para>
- Your <replaceable>MACHINE</replaceable> must be either
-qemux86" or "qemux86-64".
- </para></listitem>
- <listitem><para>
- Your build host has to have the KVM modules
- installed, which provide
- <filename>/dev/kvm</filename>.
- </para></listitem>
- <listitem><para>
- The build host <filename>/dev/kvm</filename>
- device has to be both writable and readable
- (see the quick check shown after this list).
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <filename>kvm-vhost</filename>:
- Enables KVM with VHOST support when running "qemux86"
- or "qemux86-64" QEMU architectures.
- For KVM with VHOST to work, the following conditions must
- be met:
- <itemizedlist>
- <listitem><para>
- <link linkend='kvm-cond'>kvm</link> option
- conditions must be met.
- </para></listitem>
- <listitem><para>
- Your build host has to have the virtio net device, which
- is <filename>/dev/vhost-net</filename>.
- </para></listitem>
- <listitem><para>
- The build host <filename>/dev/vhost-net</filename>
- device has to be either readable or writable
- and "slirp-enabled".
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <filename>publicvnc</filename>:
- Enables a VNC server open to all hosts.
- </para></listitem>
- </itemizedlist>
- </para>
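-
- <para>
- As a quick check of the KVM-related device nodes mentioned
- in the <filename>kvm</filename> and
- <filename>kvm-vhost</filename> descriptions above, you can
- inspect them on the build host, for example:
- <literallayout class='monospaced'>
- $ ls -l /dev/kvm /dev/vhost-net
- </literallayout>
- </para>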
- </section>
-</chapter>
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/documentation/dev-manual/dev-manual-start.xml b/documentation/dev-manual/dev-manual-start.xml
deleted file mode 100644
index 8cb5631f0d..0000000000
--- a/documentation/dev-manual/dev-manual-start.xml
+++ /dev/null
@@ -1,1287 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='dev-manual-start'>
-
-<title>Setting Up to Use the Yocto Project</title>
-
-<para>
- This chapter provides guidance on how to prepare to use the
- Yocto Project.
- You can learn about creating a team environment that develops using the
- Yocto Project, how to set up a
- <ulink url='&YOCTO_DOCS_REF_URL;#hardware-build-system-term'>build host</ulink>,
- how to locate Yocto Project source repositories, and how to create local
- Git repositories.
-</para>
-
-<section id="usingpoky-changes-collaborate">
- <title>Creating a Team Development Environment</title>
-
- <para>
- It might not be immediately clear how you can use the Yocto
- Project in a team development environment, or how to scale it for a
- large team of developers.
- You can adapt the Yocto Project to many different use cases and
- scenarios;
- however, this flexibility could cause difficulties if you are trying
- to create a working setup that scales effectively.
- </para>
-
- <para>
- To help you understand how to set up this type of environment,
- this section presents a procedure that gives you information
- that can help you get the results you want.
- The procedure is high-level and presents some of the project's most
- successful experiences, practices, solutions, and available
- technologies that have proved to work well in the past;
- however, keep in mind, the procedure here is simply a starting point.
- You can build off these steps and customize the procedure to fit any
- particular working environment and set of practices.
- <orderedlist>
- <listitem><para>
- <emphasis>Determine Who is Going to be Developing:</emphasis>
- You first need to understand who is going to be doing anything
- related to the Yocto Project and determine their roles.
- Making this determination is essential to completing
- subsequent steps, which are to get your equipment together
- and set up your development environment's hardware topology.
- </para>
-
- <para>The following roles exist:
- <itemizedlist>
- <listitem><para>
- <emphasis>Application Developer:</emphasis>
- This type of developer does application level work
- on top of an existing software stack.
- </para></listitem>
- <listitem><para>
- <emphasis>Core System Developer:</emphasis>
- This type of developer works on the contents of the
- operating system image itself.
- </para></listitem>
- <listitem><para>
- <emphasis>Build Engineer:</emphasis>
- This type of developer manages Autobuilders and
- releases. Depending on the specifics of the environment,
- not all situations might need a Build Engineer.
- </para></listitem>
- <listitem><para>
- <emphasis>Test Engineer:</emphasis>
- This type of developer creates and manages automated
- tests that are used to ensure all application and
- core system development meets desired quality
- standards.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Gather the Hardware:</emphasis>
- Based on the size and make-up of the team, get the hardware
- together.
- Ideally, any development, build, or test engineer uses
- a system that runs a supported Linux distribution.
- These systems, in general, should be high performance
- (e.g. dual, six-core Xeons with 24 Gbytes of RAM and plenty
- of disk space).
- You can help ensure efficiency by having any machines used
- for testing or that run Autobuilders be as high performance
- as possible.
- <note>
- Given sufficient processing power, you might also consider
- building Yocto Project development containers to be run
- under Docker, which is described later.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Understand the Hardware Topology of the Environment:</emphasis>
- Once you understand the hardware involved and the make-up
- of the team, you can understand the hardware topology of the
- development environment.
- You can get a visual idea of the machines and their roles
- across the development environment.
-
-<!--
- The following figure shows a moderately sized Yocto Project
- development environment.
-
- <para role="writernotes">
- Need figure.</para>
--->
-
- </para></listitem>
- <listitem><para>
- <emphasis>Use Git as Your Source Control Manager (SCM):</emphasis>
- Keeping your
- <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
- (i.e. recipes, configuration files, classes, and so forth)
- and any software you are developing under the control of an SCM
- system that is compatible with the OpenEmbedded build system
- is advisable.
- Of all of the SCMs supported by BitBake, the Yocto Project team strongly
- recommends using
- <ulink url='&YOCTO_DOCS_OM_URL;#git'>Git</ulink>.
- Git is a distributed system that is easy to back up,
- allows you to work remotely, and then connects back to the
- infrastructure.
- <note>
- For information about BitBake, see the
- <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
- </note></para>
-
- <para>It is relatively easy to set up Git services and create
- infrastructure like
- <ulink url='&YOCTO_GIT_URL;'>http://git.yoctoproject.org</ulink>,
- which is based on server software called
- <filename>gitolite</filename> with <filename>cgit</filename>
- being used to generate the web interface that lets you view the
- repositories.
- The <filename>gitolite</filename> software identifies users
- using SSH keys and allows branch-based access controls to
- repositories that you can control as little or as much as
- necessary.
- <note>
- The setup of these services is beyond the scope of this
- manual.
- However, sites such as the following exist that describe
- how to perform setup:
- <itemizedlist>
- <listitem><para>
- <ulink url='http://git-scm.com/book/ch4-8.html'>Git documentation</ulink>:
- Describes how to install
- <filename>gitolite</filename> on the server.
- </para></listitem>
- <listitem><para>
- <ulink url='http://gitolite.com'>Gitolite</ulink>:
- Information for <filename>gitolite</filename>.
- </para></listitem>
- <listitem><para>
- <ulink url='https://git.wiki.kernel.org/index.php/Interfaces,_frontends,_and_tools'>Interfaces, frontends, and tools</ulink>:
- Documentation on how to create interfaces and
- frontends for Git.
- </para></listitem>
- </itemizedlist>
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Set up the Application Development Machines:</emphasis>
- As mentioned earlier, application developers are creating
- applications on top of existing software stacks.
- Following are some best practices for setting up machines
- used for application development:
- <itemizedlist>
- <listitem><para>
- Use a pre-built toolchain that contains the software
- stack itself.
- Then, develop the application code on top of the
- stack.
- This method works well for small numbers of relatively
- isolated applications.
- </para></listitem>
- <listitem><para>
- Keep your cross-development toolchains updated.
- You can do this through provisioning either as new
- toolchain downloads or as updates through a package
- update mechanism using <filename>opkg</filename>
- to provide updates to an existing toolchain.
- The exact mechanics of how and when to do this depend
- on local policy.
- </para></listitem>
- <listitem><para>
- Use multiple toolchains installed locally into
- different locations to allow development across
- versions.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Set up the Core Development Machines:</emphasis>
- As mentioned earlier, core developers work on the contents of
- the operating system itself.
- Following are some best practices for setting up machines
- used for developing images:
- <itemizedlist>
- <listitem><para>
- Have the
- <ulink url='&YOCTO_DOCS_REF_URL;#build-system-term'>OpenEmbedded build system</ulink>
- available on the developer workstations so developers
- can run their own builds and directly rebuild the
- software stack.
- </para></listitem>
- <listitem><para>
- Keep the core system unchanged as much as
- possible and do your work in layers on top of the
- core system.
- Doing so gives you a greater level of portability when
- upgrading to new versions of the core system or Board
- Support Packages (BSPs).
- </para></listitem>
- <listitem><para>
- Share layers amongst the developers of a
- particular project and use those layers to contain
- the policy configuration that defines the project.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Set up an Autobuilder:</emphasis>
- Autobuilders are often the core of the development
- environment.
- It is here that changes from individual developers are brought
- together and centrally tested.
- Based on this automated build and test environment, subsequent
- decisions about releases can be made.
- Autobuilders also allow for "continuous integration" style
- testing of software components and regression identification
- and tracking.</para>
-
- <para>See "<ulink url='http://autobuilder.yoctoproject.org'>Yocto Project Autobuilder</ulink>"
- for more information and links to buildbot.
- The Yocto Project team has found this implementation
- works well in this role.
- A public example of this is the Yocto Project
- Autobuilders, which the Yocto Project team uses to test the
- overall health of the project.</para>
-
- <para>The features of this system are:
- <itemizedlist>
- <listitem><para>
- Highlights when commits break the build.
- </para></listitem>
- <listitem><para>
- Populates an
- <ulink url='&YOCTO_DOCS_OM_URL;#shared-state-cache'>sstate cache</ulink>
- from which developers can pull rather than requiring
- local builds.
- </para></listitem>
- <listitem><para>
- Allows commit hook triggers, which trigger builds when
- commits are made.
- </para></listitem>
- <listitem><para>
- Allows triggering of automated image booting
- and testing under the QuickEMUlator (QEMU).
- </para></listitem>
- <listitem><para>
- Supports incremental build testing and
- from-scratch builds.
- </para></listitem>
- <listitem><para>
- Shares output that allows developer
- testing and historical regression investigation.
- </para></listitem>
- <listitem><para>
- Creates output that can be used for releases.
- </para></listitem>
- <listitem><para>
- Allows scheduling of builds so that resources
- can be used efficiently.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Set up Test Machines:</emphasis>
- Use a small number of shared, high performance systems
- for testing purposes.
- Developers can use these systems for wider, more
- extensive testing while they continue to develop
- locally using their primary development system.
- </para></listitem>
- <listitem><para>
- <emphasis>Document Policies and Change Flow:</emphasis>
- The Yocto Project uses a hierarchical structure and a
- pull model.
- Scripts exist to create and send pull requests
- (i.e. <filename>create-pull-request</filename> and
- <filename>send-pull-request</filename>).
- This model is in line with other open source projects where
- maintainers are responsible for specific areas of the project
- and a single maintainer handles the final "top-of-tree" merges.
- <note>
- You can also use a more collective push model.
- The <filename>gitolite</filename> software supports both the
- push and pull models quite easily.
- </note></para>
-
- <para>As with any development environment, it is important
- to document the policy used as well as any main project
- guidelines so they are understood by everyone.
- It is also a good idea to have well-structured
- commit messages, which are usually a part of a project's
- guidelines.
- Good commit messages are essential when looking back in time and
- trying to understand why changes were made.</para>
-
- <para>If you discover that changes are needed to the core
- layer of the project, it is worth sharing those with the
- community as soon as possible.
- Chances are if you have discovered the need for changes,
- someone else in the community needs them also.
- </para></listitem>
- <listitem><para>
- <emphasis>Development Environment Summary:</emphasis>
- Aside from the previous steps, some best practices exist
- within the Yocto Project development environment.
- Consider the following:
- <itemizedlist>
- <listitem><para>
- Use
- <ulink url='&YOCTO_DOCS_OM_URL;#git'>Git</ulink>
- as the source control system.
- </para></listitem>
- <listitem><para>
- Maintain your Metadata in layers that make sense
- for your situation.
- See the
- "<ulink url='&YOCTO_DOCS_OM_URL;#the-yocto-project-layer-model'>The Yocto Project Layer Model</ulink>"
- section in the Yocto Project Overview and Concepts
- Manual and the
- "<link linkend='understanding-and-creating-layers'>Understanding and Creating Layers</link>"
- section for more information on layers.
- </para></listitem>
- <listitem><para>
- Separate the project's Metadata and code by using
- separate Git repositories.
- See the
- "<ulink url='&YOCTO_DOCS_OM_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>"
- section in the Yocto Project Overview and Concepts
- Manual for information on these repositories.
- See the
- "<link linkend='locating-yocto-project-source-files'>Locating Yocto Project Source Files</link>"
- section for information on how to set up local Git
- repositories for related upstream Yocto Project
- Git repositories.
- </para></listitem>
- <listitem><para>
- Set up the directory for the shared state cache
- (<ulink url='&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR'><filename>SSTATE_DIR</filename></ulink>)
- where it makes sense.
- For example, set up the sstate cache on a system used
- by developers in the same organization and share the
- same source directories on their machines.
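-                            A hypothetical <filename>local.conf</filename> entry
-                            pointing at such a shared cache directory might look
-                            like the following (the path shown is illustrative only):
-                            <literallayout class='monospaced'>
-     SSTATE_DIR = "/srv/shared/sstate-cache"
-                            </literallayout>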
- </para></listitem>
- <listitem><para>
- Set up an Autobuilder and have it populate the
- sstate cache and source directories.
- </para></listitem>
- <listitem><para>
- The Yocto Project community encourages you
- to send patches to the project to fix bugs or add
- features.
- If you do submit patches, follow the project commit
- guidelines for writing good commit messages.
- See the "<link linkend='how-to-submit-a-change'>Submitting a Change to the Yocto Project</link>"
- section.
- </para></listitem>
- <listitem><para>
-                            Send changes to the core sooner rather than later,
-                            as others are likely to run into the same issues.
- For some guidance on mailing lists to use, see the list
- in the
- "<link linkend='how-to-submit-a-change'>Submitting a Change to the Yocto Project</link>"
- section.
- For a description of the available mailing lists, see
- the
- "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing Lists</ulink>"
- section in the Yocto Project Reference Manual.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- </orderedlist>
- </para>
-</section>
-
-<section id='dev-preparing-the-build-host'>
- <title>Preparing the Build Host</title>
-
- <para>
- This section provides procedures to set up a system to be used as your
- <ulink url='&YOCTO_DOCS_REF_URL;#hardware-build-system-term'>build host</ulink>
- for development using the Yocto Project.
-        Your build host can be a native Linux machine (recommended), a
-        machine (Linux, Mac, or Windows) that uses
-        <ulink url='https://github.com/crops/poky-container'>CROPS</ulink>,
-        which leverages
-        <ulink url='https://www.docker.com/'>Docker Containers</ulink>, or a
-        Windows machine capable of running the Windows Subsystem For Linux v2 (WSLv2).
- <note>
- The Yocto Project is not compatible with
- <ulink url='https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux'>Windows Subsystem for Linux v1</ulink>.
-                It is compatible with WSLv2, although WSLv2 is neither
-                officially supported nor validated.
-                If you still decide to use WSL, please upgrade to
- <ulink url='https://docs.microsoft.com/en-us/windows/wsl/wsl2-install'>WSLv2</ulink>.
- </note>
- </para>
-
- <para>
- Once your build host is set up to use the Yocto Project,
- further steps are necessary depending on what you want to
- accomplish.
- See the following references for information on how to prepare for
- Board Support Package (BSP) development and kernel development:
- <itemizedlist>
- <listitem><para>
- <emphasis>BSP Development:</emphasis>
- See the
- "<ulink url='&YOCTO_DOCS_BSP_URL;#preparing-your-build-host-to-work-with-bsp-layers'>Preparing Your Build Host to Work With BSP Layers</ulink>"
- section in the Yocto Project Board Support Package (BSP)
- Developer's Guide.
- </para></listitem>
- <listitem><para>
- <emphasis>Kernel Development:</emphasis>
- See the
- "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#preparing-the-build-host-to-work-on-the-kernel'>Preparing the Build Host to Work on the Kernel</ulink>"
- section in the Yocto Project Linux Kernel Development Manual.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <section id='setting-up-a-native-linux-host'>
- <title>Setting Up a Native Linux Host</title>
-
- <para>
- Follow these steps to prepare a native Linux machine as your
- Yocto Project Build Host:
- <orderedlist>
- <listitem><para>
- <emphasis>Use a Supported Linux Distribution:</emphasis>
- You should have a reasonably current Linux-based host
- system.
- You will have the best results with a recent release of
- Fedora, openSUSE, Debian, Ubuntu, RHEL or CentOS as these
- releases are frequently tested against the Yocto Project
- and officially supported.
- For a list of the distributions under validation and their
- status, see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>" section
- in the Yocto Project Reference Manual and the wiki page at
- <ulink url='&YOCTO_WIKI_URL;/wiki/Distribution_Support'>Distribution Support</ulink>.
- </para></listitem>
- <listitem><para>
-                    <emphasis>Have Enough Free Disk Space:</emphasis>
- Your system should have at least 50 Gbytes of free disk
- space for building images.
- </para></listitem>
- <listitem><para>
- <emphasis>Meet Minimal Version Requirements:</emphasis>
- The OpenEmbedded build system should be able to run on any
- modern distribution that has the following versions for
- Git, tar, Python and gcc.
- <itemizedlist>
- <listitem><para>
- Git 1.8.3.1 or greater
- </para></listitem>
- <listitem><para>
- tar 1.28 or greater
- </para></listitem>
- <listitem><para>
- Python 3.5.0 or greater.
- </para></listitem>
- <listitem><para>
- gcc 5.0 or greater.
- </para></listitem>
- </itemizedlist>
-                        If your build host does not meet the listed version
-                        requirements, you can take steps to prepare the
- system so that you can still use the Yocto Project.
- See the
- "<ulink url='&YOCTO_DOCS_REF_URL;#required-git-tar-python-and-gcc-versions'>Required Git, tar, Python and gcc Versions</ulink>"
- section in the Yocto Project Reference Manual for
- information.
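-                        You can quickly verify the versions installed on your
-                        build host with the tools' standard version flags, for
-                        example:
-                        <literallayout class='monospaced'>
-     $ git --version
-     $ tar --version
-     $ python3 --version
-     $ gcc --version
-                        </literallayout>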
- </para></listitem>
- <listitem><para>
- <emphasis>Install Development Host Packages:</emphasis>
- Required development host packages vary depending on your
- build host and what you want to do with the Yocto
- Project.
- Collectively, the number of required packages is large
- if you want to be able to cover all cases.</para>
-
- <para>For lists of required packages for all scenarios,
- see the
- "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-build-host'>Required Packages for the Build Host</ulink>"
- section in the Yocto Project Reference Manual.
- </para></listitem>
- </orderedlist>
- Once you have completed the previous steps, you are ready to
- continue using a given development path on your native Linux
- machine.
- If you are going to use BitBake, see the
- "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
- section.
- If you are going to use the Extensible SDK, see the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>Using the Extensible SDK</ulink>"
- Chapter in the Yocto Project Application Development and the
- Extensible Software Development Kit (eSDK) manual.
- If you want to work on the kernel, see the
- <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>.
- If you are going to use Toaster, see the
- "<ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use'>Setting Up and Using Toaster</ulink>"
- section in the Toaster User Manual.
- </para>
- </section>
-
- <section id='setting-up-to-use-crops'>
- <title>Setting Up to Use CROss PlatformS (CROPS)</title>
-
- <para>
- With
- <ulink url='https://github.com/crops/poky-container'>CROPS</ulink>,
- which leverages
- <ulink url='https://www.docker.com/'>Docker Containers</ulink>,
- you can create a Yocto Project development environment that
- is operating system agnostic.
- You can set up a container in which you can develop using the
- Yocto Project on a Windows, Mac, or Linux machine.
- </para>
-
- <para>
- Follow these general steps to prepare a Windows, Mac, or Linux
- machine as your Yocto Project build host:
- <orderedlist>
- <listitem><para>
- <emphasis>Determine What Your Build Host Needs:</emphasis>
- <ulink url='https://www.docker.com/what-docker'>Docker</ulink>
- is a software container platform that you need to install
- on the build host.
- Depending on your build host, you might have to install
- different software to support Docker containers.
- Go to the Docker installation page and read about the
- platform requirements in
- "<ulink url='https://docs.docker.com/install/#supported-platforms'>Supported Platforms</ulink>"
-                            that your build host needs to meet in order to run containers.
- </para></listitem>
- <listitem><para>
- <emphasis>Choose What To Install:</emphasis>
- Depending on whether or not your build host meets system
- requirements, you need to install "Docker CE Stable" or
- the "Docker Toolbox".
- Most situations call for Docker CE.
- However, if you have a build host that does not meet
- requirements (e.g. Pre-Windows 10 or Windows 10 "Home"
- version), you must install Docker Toolbox instead.
- </para></listitem>
- <listitem><para>
- <emphasis>Go to the Install Site for Your Platform:</emphasis>
- Click the link for the Docker edition associated with
- your build host's native software.
- For example, if your build host is running Microsoft
- Windows Version 10 and you want the Docker CE Stable
- edition, click that link under "Supported Platforms".
- </para></listitem>
- <listitem><para>
- <emphasis>Install the Software:</emphasis>
- Once you have understood all the pre-requisites, you can
- download and install the appropriate software.
- Follow the instructions for your specific machine and
- the type of the software you need to install:
- <itemizedlist>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/docker-for-windows/install/#install-docker-for-windows-desktop-app'>Docker CE for Windows</ulink>
- for Windows build hosts that meet requirements.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac'>Docker CE for Macs</ulink>
- for Mac build hosts that meet requirements.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/toolbox/toolbox_install_windows/'>Docker Toolbox for Windows</ulink>
- for Windows build hosts that do not meet Docker
- requirements.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/toolbox/toolbox_install_mac/'>Docker Toolbox for MacOS</ulink>
- for Mac build hosts that do not meet Docker
- requirements.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/install/linux/docker-ce/centos/'>Docker CE for CentOS</ulink>
- for Linux build hosts running the CentOS
- distribution.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/install/linux/docker-ce/debian/'>Docker CE for Debian</ulink>
- for Linux build hosts running the Debian
- distribution.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/install/linux/docker-ce/fedora/'>Docker CE for Fedora</ulink>
- for Linux build hosts running the Fedora
- distribution.
- </para></listitem>
- <listitem><para>
- Install
- <ulink url='https://docs.docker.com/install/linux/docker-ce/ubuntu/'>Docker CE for Ubuntu</ulink>
- for Linux build hosts running the Ubuntu
- distribution.
- </para></listitem>
- </itemizedlist>
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Orient Yourself With Docker:</emphasis>
- If you are unfamiliar with Docker and the container
- concept, you can learn more here -
- <ulink url='https://docs.docker.com/get-started/'></ulink>.
- </para></listitem>
- <listitem><para>
- <emphasis>Launch Docker or Docker Toolbox:</emphasis>
- You should be able to launch Docker or the Docker Toolbox
- and have a terminal shell on your development host.
- </para></listitem>
- <listitem><para>
- <emphasis>Set Up the Containers to Use the Yocto Project:</emphasis>
- Go to
- <ulink url='https://github.com/crops/docker-win-mac-docs/wiki'></ulink>
- and follow the directions for your particular
- build host (i.e. Linux, Mac, or Windows).</para>
-
- <para>Once you complete the setup instructions for your
- machine, you have the Poky, Extensible SDK, and Toaster
- containers available.
- You can click those links from the page and learn more
- about using each of those containers.
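-                        As a purely illustrative example, pulling and starting
-                        the Poky container from a terminal might look like the
-                        following (the host directory is hypothetical; see the
-                        wiki above for the exact options):
-                        <literallayout class='monospaced'>
-     $ docker pull crops/poky
-     $ docker run --rm -it -v /home/myuser/mystuff:/workdir crops/poky --workdir=/workdir
-                        </literallayout>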
- </para></listitem>
- </orderedlist>
- Once you have a container set up, everything is in place to
- develop just as if you were running on a native Linux machine.
- If you are going to use the Poky container, see the
- "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
- section.
- If you are going to use the Extensible SDK container, see the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>Using the Extensible SDK</ulink>"
- Chapter in the Yocto Project Application Development and the
- Extensible Software Development Kit (eSDK) manual.
- If you are going to use the Toaster container, see the
- "<ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use'>Setting Up and Using Toaster</ulink>"
- section in the Toaster User Manual.
- </para>
- </section>
-
- <section id='setting-up-to-use-wsl'>
- <title>Setting Up to Use Windows Subsystem For Linux (WSLv2)</title>
-
- <para>
- With <ulink url='https://docs.microsoft.com/en-us/windows/wsl/wsl2-about'>
- Windows Subsystem for Linux (WSLv2)</ulink>, you can create a
- Yocto Project development environment that allows you to build
- on Windows. You can set up a Linux distribution inside Windows
- in which you can develop using the Yocto Project.
- </para>
-
- <para>
- Follow these general steps to prepare a Windows machine using WSLv2
- as your Yocto Project build host:
- <orderedlist>
- <listitem><para>
- <emphasis>Make sure your Windows 10 machine is capable of running WSLv2:</emphasis>
-
- WSLv2 is only available for Windows 10 builds > 18917. To
- check which build version you are running, you may open a
- command prompt on Windows and execute the command "ver".
- <literallayout class='monospaced'>
- C:\Users\myuser> ver
-
- Microsoft Windows [Version 10.0.19041.153]
- </literallayout>
-                    If your build is capable of running WSLv2, you may continue.
-                    For more information on this subject, or for instructions on
-                    how to upgrade to WSLv2, visit <ulink url='https://docs.microsoft.com/en-us/windows/wsl/wsl2-install'>Windows 10 WSLv2</ulink>.
- </para></listitem>
- <listitem><para>
- <emphasis>Install the Linux distribution of your choice inside Windows 10:</emphasis>
- Once you know your version of Windows 10 supports WSLv2,
- you can install the distribution of your choice from the
- Microsoft Store.
-                    Open the Microsoft Store and search for Linux. While several
-                    Linux distributions are available, the assumption is that you
-                    will pick one of the distributions supported by the Yocto
-                    Project, as listed in the instructions for using a native
-                    Linux host.
- After making your selection, simply click "Get" to download
- and install the distribution.
- </para></listitem>
- <listitem><para>
- <emphasis>Check your Linux distribution is using WSLv2:</emphasis>
- Open a Windows PowerShell and run:
- <literallayout class='monospaced'>
- C:\WINDOWS\system32> wsl -l -v
- NAME STATE VERSION
- *Ubuntu Running 2
- </literallayout>
-                    Note the "VERSION" column, which indicates the WSL version
-                    being used by your distribution. On compatible systems, this
-                    can be changed back at any point in time.
- </para></listitem>
- <listitem><para>
- <emphasis>Optionally Orient Yourself on WSL:</emphasis>
- If you are unfamiliar with WSL, you can learn more here -
- <ulink url='https://docs.microsoft.com/en-us/windows/wsl/wsl2-about'></ulink>.
- </para></listitem>
- <listitem><para>
-                    <emphasis>Launch your WSL Distribution:</emphasis>
- From the Windows start menu simply launch your WSL distribution
- just like any other application.
- </para></listitem>
- <listitem><para>
- <emphasis>Optimize your WSLv2 storage often:</emphasis>
-                    Due to the way storage is handled on WSLv2, the storage
-                    space used by the underlying Linux distribution is not
-                    reflected immediately, and since BitBake heavily uses
-                    storage, after several builds you may be unaware that you
-                    are running out of space. WSLv2 uses a VHDX file for
-                    storage. This issue can easily be avoided by manually
-                    optimizing this file often, which can be done in the
-                    following way:
- <orderedlist>
- <listitem><para>
- <emphasis>Find the location of your VHDX file:</emphasis>
-                            First, you need to find the distro app package directory.
-                            To achieve this, open a Windows PowerShell as Administrator
-                            and run:
- <literallayout class='monospaced'>
- C:\WINDOWS\system32> Get-AppxPackage -Name "*Ubuntu*" | Select PackageFamilyName
- PackageFamilyName
- -----------------
- CanonicalGroupLimited.UbuntuonWindows_79abcdefgh
- </literallayout>
-                            You should now substitute the <replaceable>PackageFamilyName</replaceable>
-                            and your <replaceable>user</replaceable> in the following
-                            path to find your VHDX file: <filename>C:\Users\user\AppData\Local\Packages\PackageFamilyName\LocalState\</filename>.
- For example:
- <literallayout class='monospaced'>
- ls C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\
- Mode LastWriteTime Length Name
- -a---- 3/14/2020 9:52 PM 57418973184 ext4.vhdx
- </literallayout>
- Your VHDX file path is: <filename>C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\ext4.vhdx</filename>
- </para></listitem>
- <listitem><para><emphasis>Optimize your VHDX file:</emphasis>
-                            Open a Windows PowerShell as Administrator to optimize
-                            your VHDX file, shutting down WSL first:
- <literallayout class='monospaced'>
- C:\WINDOWS\system32> wsl --shutdown
- C:\WINDOWS\system32> optimize-vhd -Path C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\ext4.vhdx -Mode full
- </literallayout>
- A progress bar should be shown while optimizing the VHDX file,
- and storage should now be reflected correctly on the Windows
- Explorer.
- </para></listitem>
- </orderedlist>
- </para></listitem>
- </orderedlist>
- <note>
-            The current implementation of WSLv2 does not have out-of-the-box
-            access to external devices such as those connected through a
-            USB port, but it automatically mounts your <filename>C:</filename>
-            drive on <filename>/mnt/c/</filename> (and others). You can use
-            these mount points to share deploy artifacts to be later flashed
-            onto hardware through Windows; however, your build directory should
-            not reside inside them.
- </note>
- Once you have WSLv2 set up, everything is in place to
- develop just as if you were running on a native Linux machine.
- If you are going to use the Extensible SDK container, see the
- "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>Using the Extensible SDK</ulink>"
- Chapter in the Yocto Project Application Development and the
- Extensible Software Development Kit (eSDK) manual.
- If you are going to use the Toaster container, see the
- "<ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use'>Setting Up and Using Toaster</ulink>"
- section in the Toaster User Manual.
- </para>
- </section>
-</section>
-
-<section id='locating-yocto-project-source-files'>
- <title>Locating Yocto Project Source Files</title>
-
- <para>
- This section shows you how to locate, fetch and configure the source
- files you'll need to work with the Yocto Project.
- <note><title>Notes</title>
- <itemizedlist>
- <listitem><para>
- For concepts and introductory information about Git as it
- is used in the Yocto Project, see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#git'>Git</ulink>"
- section in the Yocto Project Overview and Concepts Manual.
- </para></listitem>
- <listitem><para>
- For concepts on Yocto Project source repositories, see the
- "<ulink url='&YOCTO_DOCS_OM_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>"
-                    section in the Yocto Project Overview and Concepts Manual.
- </para></listitem>
- </itemizedlist>
- </note>
- </para>
-
- <section id='accessing-source-repositories'>
- <title>Accessing Source Repositories</title>
-
- <para>
- Working from a copy of the upstream Yocto Project
- <ulink url='&YOCTO_DOCS_OM_URL;#source-repositories'>Source Repositories</ulink>
- is the preferred method for obtaining and using a Yocto Project
- release.
- You can view the Yocto Project Source Repositories at
- <ulink url='&YOCTO_GIT_URL;'></ulink>.
- In particular, you can find the
- <filename>poky</filename> repository at
- <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/poky/'></ulink>.
- </para>
-
- <para>
- Use the following procedure to locate the latest upstream copy of
- the <filename>poky</filename> Git repository:
- <orderedlist>
- <listitem><para>
- <emphasis>Access Repositories:</emphasis>
- Open a browser and go to
- <ulink url='&YOCTO_GIT_URL;'></ulink> to access the
- GUI-based interface into the Yocto Project source
- repositories.
- </para></listitem>
- <listitem><para>
- <emphasis>Select the Repository:</emphasis>
- Click on the repository in which you are interested (e.g.
- <filename>poky</filename>).
- </para></listitem>
- <listitem><para>
- <emphasis>Find the URL Used to Clone the Repository:</emphasis>
- At the bottom of the page, note the URL used to
- <ulink url='&YOCTO_DOCS_OM_URL;#git-commands-clone'>clone</ulink>
- that repository (e.g.
- <filename>&YOCTO_GIT_URL;/poky</filename>).
- <note>
- For information on cloning a repository, see the
- "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
- section.
- </note>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='accessing-index-of-releases'>
- <title>Accessing Index of Releases</title>
-
- <para>
- Yocto Project maintains an Index of Releases area that contains
- related files that contribute to the Yocto Project.
- Rather than Git repositories, these files are tarballs that
- represent snapshots in time of a given component.
- <note><title>Tip</title>
- The recommended method for accessing Yocto Project
- components is to use Git to clone the upstream repository and
- work from within that locally cloned repository.
- The procedure in this section exists should you desire a
- tarball snapshot of any given component.
- </note>
- Follow these steps to locate and download a particular tarball:
- <orderedlist>
- <listitem><para>
- <emphasis>Access the Index of Releases:</emphasis>
- Open a browser and go to
- <ulink url='&YOCTO_DL_URL;/releases'></ulink> to access the
- Index of Releases.
- The list represents released components (e.g.
- <filename>bitbake</filename>,
- <filename>sato</filename>, and so on).
- <note>
- The <filename>yocto</filename> directory contains the
- full array of released Poky tarballs.
- The <filename>poky</filename> directory in the
- Index of Releases was historically used for very
- early releases and exists now only for retroactive
- completeness.
- </note>
- </para></listitem>
- <listitem><para>
- <emphasis>Select a Component:</emphasis>
- Click on any released component in which you are interested
- (e.g. <filename>yocto</filename>).
- </para></listitem>
- <listitem><para>
- <emphasis>Find the Tarball:</emphasis>
- Drill down to find the associated tarball.
- For example, click on <filename>yocto-&DISTRO;</filename> to
- view files associated with the Yocto Project &DISTRO;
- release (e.g. <filename>poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;.tar.bz2</filename>,
- which is the released Poky tarball).
- </para></listitem>
- <listitem><para>
- <emphasis>Download the Tarball:</emphasis>
- Click the tarball to download and save a snapshot of the
- given component.
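-                        If you prefer the command line, the same snapshot can
-                        be fetched directly with a tool such as
-                        <filename>wget</filename>, where the URL simply follows
-                        the index layout described in the previous steps:
-                        <literallayout class='monospaced'>
-     $ wget &YOCTO_DL_URL;/releases/yocto/yocto-&DISTRO;/poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;.tar.bz2
-                        </literallayout>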
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='using-the-downloads-page'>
- <title>Using the Downloads Page</title>
-
- <para>
- The
- <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>
- uses a "DOWNLOADS" page from which you can locate and download
- tarballs of any Yocto Project release.
- Rather than Git repositories, these files represent snapshot
- tarballs similar to the tarballs located in the Index of Releases
- described in the
- "<link linkend='accessing-index-of-releases'>Accessing Index of Releases</link>"
- section.
- <note><title>Tip</title>
- The recommended method for accessing Yocto Project
- components is to use Git to clone a repository and work from
- within that local repository.
- The procedure in this section exists should you desire a
- tarball snapshot of any given component.
- </note>
- <orderedlist>
- <listitem><para>
- <emphasis>Go to the Yocto Project Website:</emphasis>
- Open The
- <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>
- in your browser.
- </para></listitem>
- <listitem><para>
- <emphasis>Get to the Downloads Area:</emphasis>
- Select the "DOWNLOADS" item from the pull-down
- "SOFTWARE" tab menu near the top of the page.
- </para></listitem>
- <listitem><para>
- <emphasis>Select a Yocto Project Release:</emphasis>
- Use the menu next to "RELEASE" to display and choose
- a recent or past supported Yocto Project release
- (e.g. &DISTRO_NAME_NO_CAP;,
- &DISTRO_NAME_NO_CAP_MINUS_ONE;, and so forth).
- <note><title>Tip</title>
- For a "map" of Yocto Project releases to version
- numbers, see the
- <ulink url='https://wiki.yoctoproject.org/wiki/Releases'>Releases</ulink>
- wiki page.
- </note>
- You can use the "RELEASE ARCHIVE" link to reveal a menu of
- all Yocto Project releases.
- </para></listitem>
- <listitem><para>
- <emphasis>Download Tools or Board Support Packages (BSPs):</emphasis>
- From the "DOWNLOADS" page, you can download tools or
- BSPs as well.
- Just scroll down the page and look for what you need.
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='accessing-nightly-builds'>
- <title>Accessing Nightly Builds</title>
-
- <para>
- Yocto Project maintains an area for nightly builds that contains
- tarball releases at <ulink url='&YOCTO_AB_NIGHTLY_URL;'/>.
- These builds include Yocto Project releases ("poky"),
- toolchains, and builds for supported machines.
- </para>
-
- <para>
- Should you ever want to access a nightly build of a particular
- Yocto Project component, use the following procedure:
- <orderedlist>
- <listitem><para>
- <emphasis>Locate the Index of Nightly Builds:</emphasis>
- Open a browser and go to
- <ulink url='&YOCTO_AB_NIGHTLY_URL;'/> to access the
- Nightly Builds.
- </para></listitem>
- <listitem><para>
- <emphasis>Select a Date:</emphasis>
- Click on the date in which you are interested.
- If you want the latest builds, use "CURRENT".
- </para></listitem>
- <listitem><para>
- <emphasis>Select a Build:</emphasis>
- Choose the area in which you are interested.
- For example, if you are looking for the most recent
- toolchains, select the "toolchain" link.
- </para></listitem>
- <listitem><para>
- <emphasis>Find the Tarball:</emphasis>
- Drill down to find the associated tarball.
- </para></listitem>
- <listitem><para>
- <emphasis>Download the Tarball:</emphasis>
- Click the tarball to download and save a snapshot of the
- given component.
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-</section>
-
-<section id='cloning-and-checking-out-branches'>
- <title>Cloning and Checking Out Branches</title>
-
- <para>
- To use the Yocto Project for development, you need a release locally
- installed on your development system.
- This locally installed set of files is referred to as the
- <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
- in the Yocto Project documentation.
- </para>
-
- <para>
- The preferred method of creating your Source Directory is by using
- <ulink url='&YOCTO_DOCS_OM_URL;#git'>Git</ulink> to clone a local
- copy of the upstream <filename>poky</filename> repository.
- Working from a cloned copy of the upstream repository allows you
- to contribute back into the Yocto Project or to simply work with
- the latest software on a development branch.
-        Because the upstream repository maintains a complete history of
-        changes, and you are working with a local clone of that
-        repository, you have access to all the Yocto
- Project development branches and tag names used in the upstream
- repository.
- </para>
-
- <section id='cloning-the-poky-repository'>
- <title>Cloning the <filename>poky</filename> Repository</title>
-
- <para>
- Follow these steps to create a local version of the
- upstream
- <ulink url='&YOCTO_DOCS_REF_URL;#poky'><filename>poky</filename></ulink>
- Git repository.
- <orderedlist>
- <listitem><para>
- <emphasis>Set Your Directory:</emphasis>
- Change your working directory to where you want to
- create your local copy of
- <filename>poky</filename>.
- </para></listitem>
- <listitem><para>
- <emphasis>Clone the Repository:</emphasis>
- The following example command clones the
- <filename>poky</filename> repository and uses
- the default name "poky" for your local repository:
- <literallayout class='monospaced'>
- $ git clone git://git.yoctoproject.org/poky
- Cloning into 'poky'...
- remote: Counting objects: 432160, done.
- remote: Compressing objects: 100% (102056/102056), done.
- remote: Total 432160 (delta 323116), reused 432037 (delta 323000)
- Receiving objects: 100% (432160/432160), 153.81 MiB | 8.54 MiB/s, done.
- Resolving deltas: 100% (323116/323116), done.
- Checking connectivity... done.
- </literallayout>
- Unless you specify a specific development branch or
- tag name, Git clones the "master" branch, which results
- in a snapshot of the latest development changes for
- "master".
- For information on how to check out a specific
- development branch or on how to check out a local
- branch based on a tag name, see the
- "<link linkend='checking-out-by-branch-in-poky'>Checking Out By Branch in Poky</link>"
- and
-                        "<link linkend='checkout-out-by-tag-in-poky'>Checking Out By Tag in Poky</link>"
- sections, respectively.</para>
-
- <para>Once the local repository is created, you can
- change to that directory and check its status.
- Here, the single "master" branch exists on your system
- and by default, it is checked out:
- <literallayout class='monospaced'>
- $ cd ~/poky
- $ git status
- On branch master
- Your branch is up-to-date with 'origin/master'.
- nothing to commit, working directory clean
- $ git branch
- * master
- </literallayout>
- Your local repository of poky is identical to the
- upstream poky repository at the time from which it was
- cloned.
- As you work with the local branch, you can periodically
- use the <filename>git pull &dash;&dash;rebase</filename>
- command to be sure you are up-to-date with the upstream
- branch.
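-                            For example, from within your local
-                            <filename>poky</filename> directory:
-                            <literallayout class='monospaced'>
-     $ git pull --rebase
-                            </literallayout>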
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='checking-out-by-branch-in-poky'>
- <title>Checking Out by Branch in Poky</title>
-
- <para>
- When you clone the upstream poky repository, you have access to
- all its development branches.
- Each development branch in a repository is unique as it forks
- off the "master" branch.
- To see and use the files of a particular development branch
- locally, you need to know the branch name and then specifically
- check out that development branch.
- <note>
- Checking out an active development branch by branch name
- gives you a snapshot of that particular branch at the time
- you check it out.
-                Further development on top of the branch can occur after
-                you check it out.
- </note>
- <orderedlist>
- <listitem><para>
- <emphasis>Switch to the Poky Directory:</emphasis>
- If you have a local poky Git repository, switch to that
- directory.
- If you do not have the local copy of poky, see the
- "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
- section.
- </para></listitem>
- <listitem><para>
- <emphasis>Determine Existing Branch Names:</emphasis>
- <literallayout class='monospaced'>
- $ git branch -a
- * master
- remotes/origin/1.1_M1
- remotes/origin/1.1_M2
- remotes/origin/1.1_M3
- remotes/origin/1.1_M4
- remotes/origin/1.2_M1
- remotes/origin/1.2_M2
- remotes/origin/1.2_M3
- .
- .
- .
- remotes/origin/thud
- remotes/origin/thud-next
- remotes/origin/warrior
- remotes/origin/warrior-next
- remotes/origin/zeus
- remotes/origin/zeus-next
- ... and so on ...
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Check out the Branch:</emphasis>
- Check out the development branch in which you want to work.
- For example, to access the files for the Yocto Project
- &DISTRO; Release (&DISTRO_NAME;), use the following command:
- <literallayout class='monospaced'>
- $ git checkout -b &DISTRO_NAME_NO_CAP; origin/&DISTRO_NAME_NO_CAP;
- Branch &DISTRO_NAME_NO_CAP; set up to track remote branch &DISTRO_NAME_NO_CAP; from origin.
- Switched to a new branch '&DISTRO_NAME_NO_CAP;'
- </literallayout>
- The previous command checks out the "&DISTRO_NAME_NO_CAP;"
- development branch and reports that the branch is tracking
- the upstream "origin/&DISTRO_NAME_NO_CAP;" branch.</para>
-
- <para>The following command displays the branches
- that are now part of your local poky repository.
- The asterisk character indicates the branch that is
- currently checked out for work:
- <literallayout class='monospaced'>
- $ git branch
- master
- * &DISTRO_NAME_NO_CAP;
- </literallayout>
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-
- <section id='checkout-out-by-tag-in-poky'>
- <title>Checking Out by Tag in Poky</title>
-
- <para>
- Similar to branches, the upstream repository uses tags
- to mark specific commits associated with significant points in
- a development branch (i.e. a release point or stage of a
- release).
- You might want to set up a local branch based on one of those
- points in the repository.
- The process is similar to checking out by branch name except you
- use tag names.
- <note>
- Checking out a branch based on a tag gives you a
- stable set of files not affected by development on the
- branch above the tag.
- </note>
- <orderedlist>
- <listitem><para>
- <emphasis>Switch to the Poky Directory:</emphasis>
- If you have a local poky Git repository, switch to that
- directory.
- If you do not have the local copy of poky, see the
- "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
- section.
- </para></listitem>
- <listitem><para>
- <emphasis>Fetch the Tag Names:</emphasis>
-                        To check out the branch based on a tag name, you need to
- fetch the upstream tags into your local repository:
- <literallayout class='monospaced'>
- $ git fetch --tags
- $
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>List the Tag Names:</emphasis>
- You can list the tag names now:
- <literallayout class='monospaced'>
- $ git tag
- 1.1_M1.final
- 1.1_M1.rc1
- 1.1_M1.rc2
- 1.1_M2.final
- 1.1_M2.rc1
- .
- .
- .
- yocto-2.5
- yocto-2.5.1
- yocto-2.5.2
- yocto-2.5.3
- yocto-2.6
- yocto-2.6.1
- yocto-2.6.2
- yocto-2.7
- yocto_1.5_M5.rc8
- </literallayout>
- </para></listitem>
- <listitem><para>
- <emphasis>Check out the Branch:</emphasis>
- <literallayout class='monospaced'>
- $ git checkout tags/&DISTRO_REL_TAG; -b my_yocto_&DISTRO;
- Switched to a new branch 'my_yocto_&DISTRO;'
- $ git branch
- master
- * my_yocto_&DISTRO;
- </literallayout>
- The previous command creates and checks out a local
- branch named "my_yocto_&DISTRO;", which is based on
- the commit in the upstream poky repository that has
- the same tag.
- In this example, the files you have available locally
- as a result of the <filename>checkout</filename>
- command are a snapshot of the
- "&DISTRO_NAME_NO_CAP;" development branch at the point
- where Yocto Project &DISTRO; was released.
- </para></listitem>
- </orderedlist>
- </para>
- </section>
-</section>
-
-</chapter>
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/documentation/dev-manual/dev-manual.xml b/documentation/dev-manual/dev-manual.xml
deleted file mode 100755
index 6f86454ede..0000000000
--- a/documentation/dev-manual/dev-manual.xml
+++ /dev/null
@@ -1,194 +0,0 @@
-<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<book id='dev-manual' lang='en'
- xmlns:xi="http://www.w3.org/2003/XInclude"
- xmlns="http://docbook.org/ns/docbook"
- >
- <bookinfo>
-
- <mediaobject>
- <imageobject>
- <imagedata fileref='figures/dev-title.png'
- format='SVG'
- align='left' scalefit='1' width='100%'/>
- </imageobject>
- </mediaobject>
-
- <title>
- Yocto Project Development Tasks Manual
- </title>
-
- <authorgroup>
- <author>
- <affiliation>
- <orgname>&ORGNAME;</orgname>
- </affiliation>
- <email>&ORGEMAIL;</email>
- </author>
- </authorgroup>
-
- <revhistory>
- <revision>
- <revnumber>1.1</revnumber>
- <date>October 2011</date>
- <revremark>The initial document released with the Yocto Project 1.1 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.2</revnumber>
- <date>April 2012</date>
- <revremark>Released with the Yocto Project 1.2 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.3</revnumber>
- <date>October 2012</date>
- <revremark>Released with the Yocto Project 1.3 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.4</revnumber>
- <date>April 2013</date>
- <revremark>Released with the Yocto Project 1.4 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.5</revnumber>
- <date>October 2013</date>
- <revremark>Released with the Yocto Project 1.5 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.6</revnumber>
- <date>April 2014</date>
- <revremark>Released with the Yocto Project 1.6 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.7</revnumber>
- <date>October 2014</date>
- <revremark>Released with the Yocto Project 1.7 Release.</revremark>
- </revision>
- <revision>
- <revnumber>1.8</revnumber>
- <date>April 2015</date>
- <revremark>Released with the Yocto Project 1.8 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.0</revnumber>
- <date>October 2015</date>
- <revremark>Released with the Yocto Project 2.0 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.1</revnumber>
- <date>April 2016</date>
- <revremark>Released with the Yocto Project 2.1 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.2</revnumber>
- <date>October 2016</date>
- <revremark>Released with the Yocto Project 2.2 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.3</revnumber>
- <date>May 2017</date>
- <revremark>Released with the Yocto Project 2.3 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.4</revnumber>
- <date>October 2017</date>
- <revremark>Released with the Yocto Project 2.4 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.5</revnumber>
- <date>May 2018</date>
- <revremark>Released with the Yocto Project 2.5 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.6</revnumber>
- <date>November 2018</date>
- <revremark>Released with the Yocto Project 2.6 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.7</revnumber>
- <date>May 2019</date>
- <revremark>Released with the Yocto Project 2.7 Release.</revremark>
- </revision>
- <revision>
- <revnumber>3.0</revnumber>
- <date>October 2019</date>
- <revremark>Released with the Yocto Project 3.0 Release.</revremark>
- </revision>
- <revision>
- <revnumber>3.1</revnumber>
- <date>&REL_MONTH_YEAR;</date>
- <revremark>Released with the Yocto Project 3.1 Release.</revremark>
- </revision>
- </revhistory>
-
- <copyright>
- <year>&COPYRIGHT_YEAR;</year>
- <holder>Linux Foundation</holder>
- </copyright>
-
- <legalnotice>
- <para>
- Permission is granted to copy, distribute and/or modify this document under
- the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-sa/2.0/uk/">
- Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by
- Creative Commons.
- </para>
- <note><title>Manual Notes</title>
- <itemizedlist>
- <listitem><para>
- This version of the
- <emphasis>Yocto Project Development Tasks Manual</emphasis>
- is for the &YOCTO_DOC_VERSION; release of the
- Yocto Project.
- To be sure you have the latest version of the manual
- for this release, go to the
- <ulink url='&YOCTO_DOCS_URL;'>Yocto Project documentation page</ulink>
- and select the manual from that site.
- Manuals from the site are more up-to-date than manuals
- derived from the Yocto Project released TAR files.
- </para></listitem>
- <listitem><para>
- If you located this manual through a web search, the
- version of the manual might not be the one you want
- (e.g. the search might have returned a manual much
- older than the Yocto Project version with which you
- are working).
- You can see all Yocto Project major releases by
- visiting the
- <ulink url='&YOCTO_WIKI_URL;/wiki/Releases'>Releases</ulink>
- page.
- If you need a version of this manual for a different
- Yocto Project release, visit the
- <ulink url='&YOCTO_DOCS_URL;'>Yocto Project documentation page</ulink>
- and select the manual set by using the
- "ACTIVE RELEASES DOCUMENTATION" or "DOCUMENTS ARCHIVE"
- pull-down menus.
- </para></listitem>
- <listitem>
- <para>
- To report any inaccuracies or problems with this
- (or any other Yocto Project) manual, send an email to
- the Yocto Project documentation mailing list at
- <filename>docs@lists.yoctoproject.org</filename> or
- log into the freenode <filename>#yocto</filename> channel.
- </para>
- </listitem>
- </itemizedlist>
- </note>
- </legalnotice>
-
- </bookinfo>
-
- <xi:include href="dev-manual-intro.xml"/>
-
- <xi:include href="dev-manual-start.xml"/>
-
- <xi:include href="dev-manual-common-tasks.xml"/>
-
- <xi:include href="dev-manual-qemu.xml"/>
-
-</book>
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/documentation/dev-manual/dev-style.css b/documentation/dev-manual/dev-style.css
deleted file mode 100644
index 6d0aa8e9fa..0000000000
--- a/documentation/dev-manual/dev-style.css
+++ /dev/null
@@ -1,988 +0,0 @@
-/*
- Generic XHTML / DocBook XHTML CSS Stylesheet.
-
- Browser wrangling and typographic design by
- Oyvind Kolas / pippin@gimp.org
-
- Customised for Poky by
- Matthew Allum / mallum@o-hand.com
-
- Thanks to:
- Liam R. E. Quin
- William Skaggs
- Jakub Steiner
-
- Structure
- ---------
-
- The stylesheet is divided into the following sections:
-
- Positioning
- Margins, paddings, width, font-size, clearing.
- Decorations
- Borders, style
- Colors
- Colors
- Graphics
- Graphical backgrounds
- Nasty IE tweaks
- Workarounds needed to make it work in internet explorer,
- currently makes the stylesheet non validating, but up until
- this point it is validating.
- Mozilla extensions
- Transparency for footer
- Rounded corners on boxes
-
-*/
-
-
- /*************** /
- / Positioning /
-/ ***************/
-
-body {
- font-family: Verdana, Sans, sans-serif;
-
- min-width: 640px;
- width: 80%;
- margin: 0em auto;
- padding: 2em 5em 5em 5em;
- color: #333;
-}
-
-h1,h2,h3,h4,h5,h6,h7 {
- font-family: Arial, Sans;
- color: #00557D;
- clear: both;
-}
-
-h1 {
- font-size: 2em;
- text-align: left;
- padding: 0em 0em 0em 0em;
- margin: 2em 0em 0em 0em;
-}
-
-h2.subtitle {
- margin: 0.10em 0em 3.0em 0em;
- padding: 0em 0em 0em 0em;
- font-size: 1.8em;
- padding-left: 20%;
- font-weight: normal;
- font-style: italic;
-}
-
-h2 {
- margin: 2em 0em 0.66em 0em;
- padding: 0.5em 0em 0em 0em;
- font-size: 1.5em;
- font-weight: bold;
-}
-
-h3.subtitle {
- margin: 0em 0em 1em 0em;
- padding: 0em 0em 0em 0em;
- font-size: 142.14%;
- text-align: right;
-}
-
-h3 {
- margin: 1em 0em 0.5em 0em;
- padding: 1em 0em 0em 0em;
- font-size: 140%;
- font-weight: bold;
-}
-
-h4 {
- margin: 1em 0em 0.5em 0em;
- padding: 1em 0em 0em 0em;
- font-size: 120%;
- font-weight: bold;
-}
-
-h5 {
- margin: 1em 0em 0.5em 0em;
- padding: 1em 0em 0em 0em;
- font-size: 110%;
- font-weight: bold;
-}
-
-h6 {
- margin: 1em 0em 0em 0em;
- padding: 1em 0em 0em 0em;
- font-size: 110%;
- font-weight: bold;
-}
-
-.authorgroup {
- background-color: transparent;
- background-repeat: no-repeat;
- padding-top: 256px;
- background-image: url("figures/dev-title.png");
- background-position: left top;
- margin-top: -256px;
- padding-right: 50px;
- margin-left: 0px;
- text-align: right;
- width: 740px;
-}
-
-h3.author {
-  margin: 0em 0em 0em 0em;
- padding: 0em 0em 0em 0em;
- font-weight: normal;
- font-size: 100%;
- color: #333;
- clear: both;
-}
-
-.author tt.email {
- font-size: 66%;
-}
-
-.titlepage hr {
- width: 0em;
- clear: both;
-}
-
-.revhistory {
- padding-top: 2em;
- clear: both;
-}
-
-.toc,
-.list-of-tables,
-.list-of-examples,
-.list-of-figures {
- padding: 1.33em 0em 2.5em 0em;
- color: #00557D;
-}
-
-.toc p,
-.list-of-tables p,
-.list-of-figures p,
-.list-of-examples p {
- padding: 0em 0em 0em 0em;
- padding: 0em 0em 0.3em;
- margin: 1.5em 0em 0em 0em;
-}
-
-.toc p b,
-.list-of-tables p b,
-.list-of-figures p b,
-.list-of-examples p b{
- font-size: 100.0%;
- font-weight: bold;
-}
-
-.toc dl,
-.list-of-tables dl,
-.list-of-figures dl,
-.list-of-examples dl {
- margin: 0em 0em 0.5em 0em;
- padding: 0em 0em 0em 0em;
-}
-
-.toc dt {
- margin: 0em 0em 0em 0em;
- padding: 0em 0em 0em 0em;
-}
-
-.toc dd {
- margin: 0em 0em 0em 2.6em;
- padding: 0em 0em 0em 0em;
-}
-
-div.glossary dl,
-div.variablelist dl {
-}
-
-.glossary dl dt,
-.variablelist dl dt,
-.variablelist dl dt span.term {
- font-weight: normal;
- width: 20em;
- text-align: right;
-}
-
-.variablelist dl dt {
- margin-top: 0.5em;
-}
-
-.glossary dl dd,
-.variablelist dl dd {
- margin-top: -1em;
- margin-left: 25.5em;
-}
-
-.glossary dd p,
-.variablelist dd p {
- margin-top: 0em;
- margin-bottom: 1em;
-}
-
-
-div.calloutlist table td {
- padding: 0em 0em 0em 0em;
- margin: 0em 0em 0em 0em;
-}
-
-div.calloutlist table td p {
- margin-top: 0em;
- margin-bottom: 1em;
-}
-
-div p.copyright {
- text-align: left;
-}
-
-div.legalnotice p.legalnotice-title {
- margin-bottom: 0em;
-}
-
-p {
- line-height: 1.5em;
- margin-top: 0em;
-
-}
-
-dl {
- padding-top: 0em;
-}
-
-hr {
- border: solid 1px;
-}
-
-
-.mediaobject,
-.mediaobjectco {
- text-align: center;
-}
-
-img {
- border: none;
-}
-
-ul {
- padding: 0em 0em 0em 1.5em;
-}
-
-ul li {
- padding: 0em 0em 0em 0em;
-}
-
-ul li p {
- text-align: left;
-}
-
-table {
- width :100%;
-}
-
-th {
- padding: 0.25em;
- text-align: left;
- font-weight: normal;
- vertical-align: top;
-}
-
-td {
- padding: 0.25em;
- vertical-align: top;
-}
-
-p a[id] {
- margin: 0px;
- padding: 0px;
- display: inline;
- background-image: none;
-}
-
-a {
- text-decoration: underline;
- color: #444;
-}
-
-pre {
- overflow: auto;
-}
-
-a:hover {
- text-decoration: underline;
- /*font-weight: bold;*/
-}
-
-/* This style defines how the permalink character
- appears by itself and when hovered over with
- the mouse. */
-
-[alt='Permalink'] { color: #eee; }
-[alt='Permalink']:hover { color: black; }
-
-
-div.informalfigure,
-div.informalexample,
-div.informaltable,
-div.figure,
-div.table,
-div.example {
- margin: 1em 0em;
- padding: 1em;
- page-break-inside: avoid;
-}
-
-
-div.informalfigure p.title b,
-div.informalexample p.title b,
-div.informaltable p.title b,
-div.figure p.title b,
-div.example p.title b,
-div.table p.title b{
- padding-top: 0em;
- margin-top: 0em;
- font-size: 100%;
- font-weight: normal;
-}
-
-.mediaobject .caption,
-.mediaobject .caption p {
- text-align: center;
- font-size: 80%;
- padding-top: 0.5em;
- padding-bottom: 0.5em;
-}
-
-.epigraph {
- padding-left: 55%;
- margin-bottom: 1em;
-}
-
-.epigraph p {
- text-align: left;
-}
-
-.epigraph .quote {
- font-style: italic;
-}
-.epigraph .attribution {
- font-style: normal;
- text-align: right;
-}
-
-span.application {
- font-style: italic;
-}
-
-.programlisting {
- font-family: monospace;
- font-size: 80%;
- white-space: pre;
- margin: 1.33em 0em;
- padding: 1.33em;
-}
-
-.tip,
-.warning,
-.caution,
-.note {
- margin-top: 1em;
- margin-bottom: 1em;
-
-}
-
-/* force full width of table within div */
-.tip table,
-.warning table,
-.caution table,
-.note table {
- border: none;
- width: 100%;
-}
-
-
-.tip table th,
-.warning table th,
-.caution table th,
-.note table th {
- padding: 0.8em 0.0em 0.0em 0.0em;
- margin : 0em 0em 0em 0em;
-}
-
-.tip p,
-.warning p,
-.caution p,
-.note p {
- margin-top: 0.5em;
- margin-bottom: 0.5em;
- padding-right: 1em;
- text-align: left;
-}
-
-.acronym {
- text-transform: uppercase;
-}
-
-b.keycap,
-.keycap {
- padding: 0.09em 0.3em;
- margin: 0em;
-}
-
-.itemizedlist li {
- clear: none;
-}
-
-.filename {
- font-size: medium;
- font-family: Courier, monospace;
-}
-
-
-div.navheader, div.heading{
- position: absolute;
- left: 0em;
- top: 0em;
- width: 100%;
- background-color: #cdf;
- width: 100%;
-}
-
-div.navfooter, div.footing{
- position: fixed;
- left: 0em;
- bottom: 0em;
- background-color: #eee;
- width: 100%;
-}
-
-
-div.navheader td,
-div.navfooter td {
- font-size: 66%;
-}
-
-div.navheader table th {
- /*font-family: Georgia, Times, serif;*/
- /*font-size: x-large;*/
- font-size: 80%;
-}
-
-div.navheader table {
- border-left: 0em;
- border-right: 0em;
- border-top: 0em;
- width: 100%;
-}
-
-div.navfooter table {
- border-left: 0em;
- border-right: 0em;
- border-bottom: 0em;
- width: 100%;
-}
-
-div.navheader table td a,
-div.navfooter table td a {
- color: #777;
- text-decoration: none;
-}
-
-/* normal text in the footer */
-div.navfooter table td {
- color: black;
-}
-
-div.navheader table td a:visited,
-div.navfooter table td a:visited {
- color: #444;
-}
-
-
-/* links in header and footer */
-div.navheader table td a:hover,
-div.navfooter table td a:hover {
- text-decoration: underline;
- background-color: transparent;
- color: #33a;
-}
-
-div.navheader hr,
-div.navfooter hr {
- display: none;
-}
-
-
-.qandaset tr.question td p {
- margin: 0em 0em 1em 0em;
- padding: 0em 0em 0em 0em;
-}
-
-.qandaset tr.answer td p {
- margin: 0em 0em 1em 0em;
- padding: 0em 0em 0em 0em;
-}
-.answer td {
- padding-bottom: 1.5em;
-}
-
-.emphasis {
- font-weight: bold;
-}
-
-
- /************* /
- / decorations /
-/ *************/
-
-.titlepage {
-}
-
-.part .title {
-}
-
-.subtitle {
- border: none;
-}
-
-/*
-h1 {
- border: none;
-}
-
-h2 {
- border-top: solid 0.2em;
- border-bottom: solid 0.06em;
-}
-
-h3 {
- border-top: 0em;
- border-bottom: solid 0.06em;
-}
-
-h4 {
- border: 0em;
- border-bottom: solid 0.06em;
-}
-
-h5 {
- border: 0em;
-}
-*/
-
-.programlisting {
- border: solid 1px;
-}
-
-div.figure,
-div.table,
-div.informalfigure,
-div.informaltable,
-div.informalexample,
-div.example {
- border: 1px solid;
-}
-
-
-
-.tip,
-.warning,
-.caution,
-.note {
- border: 1px solid;
-}
-
-.tip table th,
-.warning table th,
-.caution table th,
-.note table th {
- border-bottom: 1px solid;
-}
-
-.question td {
- border-top: 1px solid black;
-}
-
-.answer {
-}
-
-
-b.keycap,
-.keycap {
- border: 1px solid;
-}
-
-
-div.navheader, div.heading{
- border-bottom: 1px solid;
-}
-
-
-div.navfooter, div.footing{
- border-top: 1px solid;
-}
-
- /********* /
- / colors /
-/ *********/
-
-body {
- color: #333;
- background: white;
-}
-
-a {
- background: transparent;
-}
-
-a:hover {
- background-color: #dedede;
-}
-
-
-h1,
-h2,
-h3,
-h4,
-h5,
-h6,
-h7,
-h8 {
- background-color: transparent;
-}
-
-hr {
- border-color: #aaa;
-}
-
-
-.tip, .warning, .caution, .note {
- border-color: #fff;
-}
-
-
-.tip table th,
-.warning table th,
-.caution table th,
-.note table th {
- border-bottom-color: #fff;
-}
-
-
-.warning {
- background-color: #f0f0f2;
-}
-
-.caution {
- background-color: #f0f0f2;
-}
-
-.tip {
- background-color: #f0f0f2;
-}
-
-.note {
- background-color: #f0f0f2;
-}
-
-.glossary dl dt,
-.variablelist dl dt,
-.variablelist dl dt span.term {
- color: #044;
-}
-
-div.figure,
-div.table,
-div.example,
-div.informalfigure,
-div.informaltable,
-div.informalexample {
- border-color: #aaa;
-}
-
-pre.programlisting {
- color: black;
- background-color: #fff;
- border-color: #aaa;
- border-width: 2px;
-}
-
-.guimenu,
-.guilabel,
-.guimenuitem {
- background-color: #eee;
-}
-
-
-b.keycap,
-.keycap {
- background-color: #eee;
- border-color: #999;
-}
-
-
-div.navheader {
- border-color: black;
-}
-
-
-div.navfooter {
- border-color: black;
-}
-
-.writernotes {
- color: red;
-}
-
-
- /*********** /
- / graphics /
-/ ***********/
-
-/*
-body {
- background-image: url("images/body_bg.jpg");
- background-attachment: fixed;
-}
-
-.navheader,
-.note,
-.tip {
- background-image: url("images/note_bg.jpg");
- background-attachment: fixed;
-}
-
-.warning,
-.caution {
- background-image: url("images/warning_bg.jpg");
- background-attachment: fixed;
-}
-
-.figure,
-.informalfigure,
-.example,
-.informalexample,
-.table,
-.informaltable {
- background-image: url("images/figure_bg.jpg");
- background-attachment: fixed;
-}
-
-*/
-h1,
-h2,
-h3,
-h4,
-h5,
-h6,
-h7{
-}
-
-/*
-Example of how to stick an image as part of the title.
-
-div.article .titlepage .title
-{
- background-image: url("figures/white-on-black.png");
- background-position: center;
- background-repeat: repeat-x;
-}
-*/
-
-div.preface .titlepage .title,
-div.colophon .title,
-div.chapter .titlepage .title,
-div.article .titlepage .title
-{
-}
-
-div.section div.section .titlepage .title,
-div.sect2 .titlepage .title {
- background: none;
-}
-
-
-h1.title {
- background-color: transparent;
- background-repeat: no-repeat;
- height: 256px;
- text-indent: -9000px;
- overflow:hidden;
-}
-
-h2.subtitle {
- background-color: transparent;
- text-indent: -9000px;
- overflow:hidden;
- width: 0px;
- display: none;
-}
-
- /*************************************** /
- / pippin.gimp.org specific alterations /
-/ ***************************************/
-
-/*
-div.heading, div.navheader {
- color: #777;
- font-size: 80%;
- padding: 0;
- margin: 0;
- text-align: left;
- position: absolute;
- top: 0px;
- left: 0px;
- width: 100%;
- height: 50px;
- background: url('/gfx/heading_bg.png') transparent;
- background-repeat: repeat-x;
- background-attachment: fixed;
- border: none;
-}
-
-div.heading a {
- color: #444;
-}
-
-div.footing, div.navfooter {
- border: none;
- color: #ddd;
- font-size: 80%;
- text-align:right;
-
- width: 100%;
- padding-top: 10px;
- position: absolute;
- bottom: 0px;
- left: 0px;
-
- background: url('/gfx/footing_bg.png') transparent;
-}
-*/
-
-
-
- /****************** /
- / nasty ie tweaks /
-/ ******************/
-
-/*
-div.heading, div.navheader {
- width:expression(document.body.clientWidth + "px");
-}
-
-div.footing, div.navfooter {
- width:expression(document.body.clientWidth + "px");
- margin-left:expression("-5em");
-}
-body {
- padding:expression("4em 5em 0em 5em");
-}
-*/
-
- /**************************************** /
- / mozilla vendor specific css extensions /
-/ ****************************************/
-/*
-div.navfooter, div.footing{
- -moz-opacity: 0.8em;
-}
-
-div.figure,
-div.table,
-div.informalfigure,
-div.informaltable,
-div.informalexample,
-div.example,
-.tip,
-.warning,
-.caution,
-.note {
- -moz-border-radius: 0.5em;
-}
-
-b.keycap,
-.keycap {
- -moz-border-radius: 0.3em;
-}
-*/
-
-table tr td table tr td {
- display: none;
-}
-
-
-hr {
- display: none;
-}
-
-table {
- border: 0em;
-}
-
- .photo {
- float: right;
- margin-left: 1.5em;
- margin-bottom: 1.5em;
- margin-top: 0em;
- max-width: 17em;
- border: 1px solid gray;
- padding: 3px;
- background: white;
-}
- .seperator {
- padding-top: 2em;
- clear: both;
- }
-
- #validators {
- margin-top: 5em;
- text-align: right;
- color: #777;
- }
- @media print {
- body {
- font-size: 8pt;
- }
- .noprint {
- display: none;
- }
- }
-
-
-.tip,
-.note {
- background: #f0f0f2;
- color: #333;
- padding: 20px;
- margin: 20px;
-}
-
-.tip h3,
-.note h3 {
- padding: 0em;
- margin: 0em;
- font-size: 2em;
- font-weight: bold;
- color: #333;
-}
-
-.tip a,
-.note a {
- color: #333;
- text-decoration: underline;
-}
-
-.footnote {
- font-size: small;
- color: #333;
-}
-
-/* Changes the announcement text */
-.tip h3,
-.warning h3,
-.caution h3,
-.note h3 {
- font-size:large;
- color: #00557D;
-}
diff --git a/documentation/dev-manual/development-shell.rst b/documentation/dev-manual/development-shell.rst
new file mode 100644
index 0000000000..be26bcffc7
--- /dev/null
+++ b/documentation/dev-manual/development-shell.rst
@@ -0,0 +1,82 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using a Development Shell
+*************************
+
+When debugging certain commands or even when just editing packages,
+``devshell`` can be a useful tool. When you invoke ``devshell``, all
+tasks up to and including
+:ref:`ref-tasks-patch` are run for the
+specified target. Then, a new terminal is opened and you are placed in
+``${``\ :term:`S`\ ``}``, the source
+directory. In the new terminal, all the OpenEmbedded build-related
+environment variables are still defined so you can use commands such as
+``configure`` and ``make``. The commands execute just as if the
+OpenEmbedded build system were executing them. Consequently, working
+this way can be helpful when debugging a build or preparing software to
+be used with the OpenEmbedded build system.
+
+Here is an example that uses ``devshell`` on a target named
+``matchbox-desktop``::
+
+ $ bitbake matchbox-desktop -c devshell
+
+This command spawns a terminal with a shell prompt within the
+OpenEmbedded build environment. The
+:term:`OE_TERMINAL` variable
+controls what type of shell is opened.
+
+For spawned terminals, the following occurs:
+
+- The ``PATH`` variable includes the cross-toolchain.
+
+- The ``pkgconfig`` variables find the correct ``.pc`` files.
+
+- The ``configure`` command finds the Yocto Project site files as well
+ as any other necessary files.
+
+Within this environment, you can run configure or compile commands as if
+they were being run by the OpenEmbedded build system itself. As noted
+earlier, the working directory also automatically changes to the Source
+Directory (:term:`S`).
+
+To manually run a specific task using ``devshell``, run the
+corresponding ``run.*`` script in the
+``${``\ :term:`WORKDIR`\ ``}/temp``
+directory (e.g., ``run.do_configure.``\ `pid`). If a task's script does
+not exist, which would be the case if the task was skipped by way of the
+sstate cache, you can create the task by first running it outside of the
+``devshell``::
+
+ $ bitbake target -c task
+
+.. note::
+
+ - Execution of a task's ``run.*`` script and BitBake's execution of
+ a task are identical. In other words, running the script re-runs
+ the task just as it would be run using the ``bitbake -c`` command.
+
+ - Any ``run.*`` file that does not have a ``.pid`` extension is a
+ symbolic link (symlink) to the most recent version of that file.
+
+Remember, that the ``devshell`` is a mechanism that allows you to get
+into the BitBake task execution environment. And as such, all commands
+must be called just as BitBake would call them. That means you need to
+provide the appropriate options for cross-compilation and so forth as
+applicable.
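+
+For instance, inside the ``devshell`` for an Autotools-based recipe, a
+session might look like the following (a minimal, illustrative sketch;
+the exact configure options depend on the recipe being built)::
+
+    $ echo $CC
+    $ ./configure --host=arm-poky-linux-gnueabi
+    $ make
+
+Here, ``$CC`` expands to the full cross-compiler command that BitBake
+sets up, and the ``--host`` value shown is only an example for an Arm
+target.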
+
+When you are finished using ``devshell``, exit the shell or close the
+terminal window.
+
+.. note::
+
+ - It is worth remembering that when using ``devshell`` you need to
+ use the full compiler name such as ``arm-poky-linux-gnueabi-gcc``
+ instead of just using ``gcc``. The same applies to other
+ applications such as ``binutils``, ``libtool`` and so forth.
+ BitBake sets up environment variables such as :term:`CC` to assist
+ applications, such as ``make``, in finding the correct tools.
+
+ - It is also worth noting that ``devshell`` still works over X11
+ forwarding and similar situations.
+
diff --git a/documentation/dev-manual/device-manager.rst b/documentation/dev-manual/device-manager.rst
new file mode 100644
index 0000000000..49fc785fec
--- /dev/null
+++ b/documentation/dev-manual/device-manager.rst
@@ -0,0 +1,74 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+.. _device-manager:
+
+Selecting a Device Manager
+**************************
+
+The Yocto Project provides multiple ways to manage and populate the
+``/dev`` directory:
+
+- Persistent and Pre-Populated ``/dev``: For this case, the ``/dev``
+ directory is persistent and the required device nodes are created
+ during the build.
+
+- Use ``devtmpfs`` with a Device Manager: For this case, the ``/dev``
+ directory is provided by the kernel as an in-memory file system and
+ is automatically populated by the kernel at runtime. Additional
+ configuration of device nodes is done in user space by a device
+ manager like ``udev`` or ``busybox-mdev``.
+
+Using Persistent and Pre-Populated ``/dev``
+===========================================
+
+To use the static method for device population, you need to set the
+:term:`USE_DEVFS` variable to "0"
+as follows::
+
+ USE_DEVFS = "0"
+
+The content of the resulting ``/dev`` directory is defined in a Device
+Table file. The
+:term:`IMAGE_DEVICE_TABLES`
+variable defines the Device Table to use and should be set in the
+machine or distro configuration file. Alternatively, you can set this
+variable in your ``local.conf`` configuration file.
+
+If you do not define the :term:`IMAGE_DEVICE_TABLES` variable, the default
+``device_table-minimal.txt`` is used. Here is an example that selects a
+custom device table instead::
+
+ IMAGE_DEVICE_TABLES = "device_table-mymachine.txt"
+
+The population is handled by the ``makedevs`` utility during image
+creation.
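+
+For illustration, entries in a device table follow the ``makedevs``
+format (name, type, mode, uid, gid, major, minor, start, increment,
+count). A hypothetical excerpt could look like this::
+
+    /dev/console    c    662    0    0    5    1    -    -    -
+    /dev/null       c    666    0    0    1    3    -    -    -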
+
+Using ``devtmpfs`` and a Device Manager
+=======================================
+
+To use the dynamic method for device population, you need to use (or be
+sure to set) the :term:`USE_DEVFS`
+variable to "1", which is the default::
+
+ USE_DEVFS = "1"
+
+With this
+setting, the resulting ``/dev`` directory is populated by the kernel
+using ``devtmpfs``. Make sure the corresponding kernel configuration
+variable ``CONFIG_DEVTMPFS`` is set when you build the Linux
+kernel.
+
+All devices created by ``devtmpfs`` will be owned by ``root`` and have
+permissions ``0600``.
+
+To have more control over the device nodes, you can use a device manager like
+``udev`` or ``busybox-mdev``. You choose the device manager by defining the
+:term:`VIRTUAL-RUNTIME_dev_manager <VIRTUAL-RUNTIME>` variable in your machine
+or distro configuration file. Alternatively, you can set this variable in
+your ``local.conf`` configuration file::
+
+ VIRTUAL-RUNTIME_dev_manager = "udev"
+
+ # Some alternative values
+ # VIRTUAL-RUNTIME_dev_manager = "busybox-mdev"
+ # VIRTUAL-RUNTIME_dev_manager = "systemd"
+
diff --git a/documentation/dev-manual/disk-space.rst b/documentation/dev-manual/disk-space.rst
new file mode 100644
index 0000000000..efca82601d
--- /dev/null
+++ b/documentation/dev-manual/disk-space.rst
@@ -0,0 +1,61 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Conserving Disk Space
+*********************
+
+Conserving Disk Space During Builds
+===================================
+
+To help conserve disk space during builds, you can add the following
+statement to your project's ``local.conf`` configuration file found in
+the :term:`Build Directory`::
+
+ INHERIT += "rm_work"
+
+Adding this statement deletes the work directory used for
+building a recipe once the recipe is built. For more information on
+"rm_work", see the :ref:`ref-classes-rm-work` class in the
+Yocto Project Reference Manual.
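+
+If you need to keep the work directories of a few recipes, for example
+while debugging them, the class also honors the :term:`RM_WORK_EXCLUDE`
+variable. Here is a short sketch for ``local.conf`` (the recipe names are
+only examples)::
+
+    INHERIT += "rm_work"
+    RM_WORK_EXCLUDE += "busybox glibc"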
+
+When you inherit this class and build a ``core-image-sato`` image for a
+``qemux86-64`` machine from an Ubuntu 22.04 x86-64 system, you end up with a
+final disk usage of 22 Gbytes instead of &MIN_DISK_SPACE; Gbytes. However,
+&MIN_DISK_SPACE_RM_WORK; Gbytes of initial free disk space are still needed to
+create temporary files before they can be deleted.
+
+Purging Obsolete Shared State Cache Files
+=========================================
+
+After multiple build iterations, the Shared State (sstate) cache can contain
+multiple cache files for a given package, consuming a substantial amount of
+disk space. However, only the most recent ones are likely to be reused.
+
+The following command is a quick way to purge all the cache files which
+haven't been used for at least a specified number of days::
+
+ find build/sstate-cache -type f -mtime +$DAYS -delete
+
+The above command relies on the fact that BitBake touches the sstate cache
+files as it accesses them, when it has write access to the cache.
+
+If the cache is read only for BitBake, you could use ``-atime`` instead
+of ``-mtime``, provided the partition isn't mounted with the ``noatime`` option.
+
+For more advanced needs, OpenEmbedded-Core also offers a more elaborate
+command. It can purge all but the newest cache files for each
+architecture, and it can also remove files that it considers unreachable
+by exploring a set of build configurations. However, this command
+requires a full build environment to be available, doesn't work well
+across multiple releases, and won't run in limited environments such
+as a BSD-based NAS::
+
+ sstate-cache-management.py --remove-duplicated --cache-dir=sstate-cache
+
+This command will ask you to confirm the deletions it identifies.
+Run ``sstate-cache-management.py`` for more details about this script.
+
+.. note::
+
+ As this command is much more cautious and selective, removing only cache files,
+ it will execute much slower than the simple ``find`` command described above.
+ Therefore, it may not be your best option to trim huge cache directories.
diff --git a/documentation/dev-manual/efficiently-fetching-sources.rst b/documentation/dev-manual/efficiently-fetching-sources.rst
new file mode 100644
index 0000000000..a15f0a92ce
--- /dev/null
+++ b/documentation/dev-manual/efficiently-fetching-sources.rst
@@ -0,0 +1,68 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Efficiently Fetching Source Files During a Build
+************************************************
+
+The OpenEmbedded build system works with source files located through
+the :term:`SRC_URI` variable. When
+you build something using BitBake, a big part of the operation is
+locating and downloading all the source tarballs. For images,
+downloading all the source for various packages can take a significant
+amount of time.
+
+This section shows you how you can use mirrors to speed up fetching
+source files and how you can pre-fetch files all of which leads to more
+efficient use of resources and time.
+
+Setting up Effective Mirrors
+============================
+
+A good deal of the work that goes into a Yocto Project build is simply
+downloading all of the source tarballs. Maybe you have been working with another
+build system for which you have built up a
+sizable directory of source tarballs. Or, perhaps someone else has such
+a directory for which you have read access. If so, you can save time by
+adding statements to your configuration file so that the build process
+checks local directories first for existing tarballs before checking the
+Internet.
+
+Here is an efficient way to set it up in your ``local.conf`` file::
+
+ SOURCE_MIRROR_URL ?= "file:///home/you/your-download-dir/"
+ INHERIT += "own-mirrors"
+ BB_GENERATE_MIRROR_TARBALLS = "1"
+ # BB_NO_NETWORK = "1"
+
+In the previous example, the
+:term:`BB_GENERATE_MIRROR_TARBALLS`
+variable causes the OpenEmbedded build system to generate tarballs of
+the Git repositories and store them in the
+:term:`DL_DIR` directory. For
+performance reasons, generating and storing these tarballs is not the
+build system's default behavior.
+
+You can also use the
+:term:`PREMIRRORS` variable. For
+an example, see the variable's glossary entry in the Yocto Project
+Reference Manual.
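+
+As a rough sketch of what such a setting can look like in ``local.conf``
+(the mirror URL below is just a placeholder; see the glossary entry for
+the authoritative form)::
+
+    PREMIRRORS:prepend = "\
+        git://.*/.* http://example.com/source-mirror/ \
+        ftp://.*/.* http://example.com/source-mirror/ \
+        http://.*/.* http://example.com/source-mirror/ \
+        https://.*/.* http://example.com/source-mirror/"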
+
+Getting Source Files and Suppressing the Build
+==============================================
+
+Another technique you can use to ready yourself for a successive string
+of build operations is to pre-fetch all the source files without
+actually starting a build. This technique lets you work through any
+download issues and ultimately gathers all the source files into your
+download directory :ref:`structure-build-downloads`,
+whose location is defined by :term:`DL_DIR`.
+
+Use the following BitBake command form to fetch all the necessary
+sources without starting the build::
+
+ $ bitbake target --runall=fetch
+
+This
+variation of the BitBake command guarantees that you have all the
+sources for that BitBake target should you disconnect from the Internet
+and want to do the build later offline.
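+
+For instance, once everything has been fetched, you could check that the
+build really is self-contained by disallowing network access, reusing the
+:term:`BB_NO_NETWORK` setting shown commented out earlier::
+
+    BB_NO_NETWORK = "1"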
+
diff --git a/documentation/dev-manual/error-reporting-tool.rst b/documentation/dev-manual/error-reporting-tool.rst
new file mode 100644
index 0000000000..84f3d9cd1e
--- /dev/null
+++ b/documentation/dev-manual/error-reporting-tool.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using the Error Reporting Tool
+******************************
+
+The error reporting tool allows you to submit errors encountered during
+builds to a central database. Outside of the build environment, you can
+use a web interface to browse errors, view statistics, and query for
+errors. The tool works using a client-server system where the client
+portion is integrated with the installed Yocto Project
+:term:`Source Directory` (e.g. ``poky``).
+The server receives the information collected and saves it in a
+database.
+
+There is a live instance of the error reporting server at
+https://errors.yoctoproject.org.
+When you want to get help with build failures, you can submit all of the
+information on the failure easily and then point to the URL in your bug
+report or send an email to the mailing list.
+
+.. note::
+
+ If you send error reports to this server, the reports become publicly
+ visible.
+
+Enabling and Using the Tool
+===========================
+
+By default, the error reporting tool is disabled. You can enable it by
+inheriting the :ref:`ref-classes-report-error` class by adding the
+following statement to the end of your ``local.conf`` file in your
+:term:`Build Directory`::
+
+ INHERIT += "report-error"
+
+By default, the error reporting feature stores information in
+``${``\ :term:`LOG_DIR`\ ``}/error-report``.
+However, you can specify a directory to use by adding the following to
+your ``local.conf`` file::
+
+ ERR_REPORT_DIR = "path"
+
+Enabling error
+reporting causes the build process to collect the errors and store them
+in a file as previously described. When the build system encounters an
+error, it includes a command as part of the console output. You can run
+the command to send the error file to the server. For example, the
+following command sends the errors to an upstream server::
+
+ $ send-error-report /home/brandusa/project/poky/build/tmp/log/error-report/error_report_201403141617.txt
+
+In the previous example, the errors are sent to a public database
+available at https://errors.yoctoproject.org, which is used by the
+entire community. If you specify a particular server, you can send the
+errors to a different database. Use the following command for more
+information on available options::
+
+ $ send-error-report --help
+
+When sending the error file, you are prompted to review the data being
+sent as well as to provide a name and optional email address. Once you
+satisfy these prompts, the command returns a link from the server that
+corresponds to your entry in the database. For example, here is a
+typical link: https://errors.yoctoproject.org/Errors/Details/9522/
+
+Following the link takes you to a web interface where you can browse,
+query the errors, and view statistics.
+
+Disabling the Tool
+==================
+
+To disable the error reporting feature, simply remove or comment out the
+following statement from the end of your ``local.conf`` file in your
+:term:`Build Directory`::
+
+ INHERIT += "report-error"
+
+Setting Up Your Own Error Reporting Server
+==========================================
+
+If you want to set up your own error reporting server, you can obtain
+the code from the Git repository at :yocto_git:`/error-report-web/`.
+Instructions on how to set it up are in the README document.
+
diff --git a/documentation/dev-manual/external-scm.rst b/documentation/dev-manual/external-scm.rst
new file mode 100644
index 0000000000..97a7e63e36
--- /dev/null
+++ b/documentation/dev-manual/external-scm.rst
@@ -0,0 +1,67 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using an External SCM
+*********************
+
+If you're working on a recipe that pulls from an external Source Code
+Manager (SCM), it is possible to have the OpenEmbedded build system
+notice new recipe changes added to the SCM and then build the resulting
+packages that depend on the new recipes by using the latest versions.
+This only works for SCMs from which it is possible to get a sensible
+revision number for changes. Currently, you can do this with Apache
+Subversion (SVN), Git, and Bazaar (BZR) repositories.
+
+To enable this behavior, the :term:`PV` of
+the recipe needs to reference
+:term:`SRCPV`. Here is an example::
+
+ PV = "1.2.3+git${SRCPV}"
+
+Then, you can add the following to your
+``local.conf``::
+
+ SRCREV:pn-PN = "${AUTOREV}"
+
+:term:`PN` is the name of the recipe for
+which you want to enable automatic source revision updating.
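+
+For instance, for a hypothetical recipe named ``my-app``, the
+``local.conf`` entry would be::
+
+    SRCREV:pn-my-app = "${AUTOREV}"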
+
+If you do not want to update your local configuration file, you can add
+the following directly to the recipe to finish enabling the feature::
+
+ SRCREV = "${AUTOREV}"
+
+The Yocto Project provides a distribution named ``poky-bleeding``, whose
+configuration file contains the line::
+
+ require conf/distro/include/poky-floating-revisions.inc
+
+This line pulls in the
+listed include file that contains numerous lines of exactly that form::
+
+ #SRCREV:pn-opkg-native ?= "${AUTOREV}"
+ #SRCREV:pn-opkg-sdk ?= "${AUTOREV}"
+ #SRCREV:pn-opkg ?= "${AUTOREV}"
+ #SRCREV:pn-opkg-utils-native ?= "${AUTOREV}"
+ #SRCREV:pn-opkg-utils ?= "${AUTOREV}"
+ SRCREV:pn-gconf-dbus ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-common ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-config-gtk ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-desktop ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-keyboard ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-panel-2 ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-themes-extra ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-terminal ?= "${AUTOREV}"
+ SRCREV:pn-matchbox-wm ?= "${AUTOREV}"
+ SRCREV:pn-settings-daemon ?= "${AUTOREV}"
+ SRCREV:pn-screenshot ?= "${AUTOREV}"
+ . . .
+
+These lines allow you to
+experiment with building a distribution that tracks the latest
+development source for numerous packages.
+
+.. note::
+
+ The ``poky-bleeding`` distribution is not tested on a regular basis. Keep
+ this in mind if you use it.
+
diff --git a/documentation/dev-manual/external-toolchain.rst b/documentation/dev-manual/external-toolchain.rst
new file mode 100644
index 0000000000..238f8cf467
--- /dev/null
+++ b/documentation/dev-manual/external-toolchain.rst
@@ -0,0 +1,40 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Optionally Using an External Toolchain
+**************************************
+
+You might want to use an external toolchain as part of your development.
+If this is the case, the fundamental steps you need to accomplish are as
+follows:
+
+- Understand where the installed toolchain resides. For cases where you
+ need to build the external toolchain, you would need to take separate
+ steps to build and install the toolchain.
+
+- Make sure you add the layer that contains the toolchain to your
+ ``bblayers.conf`` file through the
+ :term:`BBLAYERS` variable.
+
+- Set the :term:`EXTERNAL_TOOLCHAIN` variable in your ``local.conf`` file
+ to the location in which you installed the toolchain.
+
+The toolchain configuration is very flexible and customizable. It
+is primarily controlled with the :term:`TCMODE` variable. This variable
+controls which ``tcmode-*.inc`` file to include from the
+``meta/conf/distro/include`` directory within the :term:`Source Directory`.
+
+The default value of :term:`TCMODE` is "default", which tells the
+OpenEmbedded build system to use its internally built toolchain (i.e.
+``tcmode-default.inc``). However, other patterns are accepted. In
+particular, "external-\*" refers to external toolchains. One example is
+the Mentor Graphics Sourcery G++ Toolchain. Support for this toolchain resides
+in the separate ``meta-sourcery`` layer at
+https://github.com/MentorEmbedded/meta-sourcery/.
+See its ``README`` file for details about how to use this layer.
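+
+As a very rough sketch, using such a layer typically comes down to
+configuration along the following lines (the paths, layer name and
+:term:`TCMODE` value are hypothetical; follow the layer's own ``README``
+for the exact settings)::
+
+    # conf/bblayers.conf
+    BBLAYERS += "/path/to/meta-sourcery"
+
+    # conf/local.conf
+    TCMODE = "external-sourcery"
+    EXTERNAL_TOOLCHAIN = "/path/to/installed-toolchain"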
+
+Another example of an external toolchain layer is
+:yocto_git:`meta-arm-toolchain </meta-arm/tree/meta-arm-toolchain/>`
+supporting GNU toolchains released by ARM.
+
+You can find further information by reading about the :term:`TCMODE` variable
+in the Yocto Project Reference Manual's variable glossary.
diff --git a/documentation/dev-manual/figures/cute-files-npm-example.png b/documentation/dev-manual/figures/cute-files-npm-example.png
index 1ebe74f535..a02cca097f 100644
--- a/documentation/dev-manual/figures/cute-files-npm-example.png
+++ b/documentation/dev-manual/figures/cute-files-npm-example.png
Binary files differ
diff --git a/documentation/dev-manual/gobject-introspection.rst b/documentation/dev-manual/gobject-introspection.rst
new file mode 100644
index 0000000000..f7206e6fae
--- /dev/null
+++ b/documentation/dev-manual/gobject-introspection.rst
@@ -0,0 +1,155 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Enabling GObject Introspection Support
+**************************************
+
+`GObject introspection <https://gi.readthedocs.io/en/latest/>`__
+is the standard mechanism for accessing GObject-based software from
+runtime environments. GObject is a feature of the GLib library that
+provides an object framework for the GNOME desktop and related software.
+GObject Introspection adds information to GObject that allows objects
+created within it to be represented across different programming
+languages. If you want to construct GStreamer pipelines using Python, or
+control UPnP infrastructure using Javascript and GUPnP, GObject
+introspection is the only way to do it.
+
+This section describes the Yocto Project support for generating and
+packaging GObject introspection data. GObject introspection data is a
+description of the API provided by libraries built on top of the GLib
+framework, and, in particular, that framework's GObject mechanism.
+GObject Introspection Repository (GIR) files go to ``-dev`` packages,
+while ``typelib`` files go to main packages because they are packaged
+together with the libraries that are introspected.
+
+The data is generated when building such a library, by linking the
+library with a small executable binary that asks the library to describe
+itself, and then executing the binary and processing its output.
+
+Generating this data in a cross-compilation environment is difficult
+because the library is produced for the target architecture, but its
+code needs to be executed on the build host. This problem is solved with
+the OpenEmbedded build system by running the code through QEMU, which
+allows precisely that. Unfortunately, QEMU does not always work
+perfectly as mentioned in the ":ref:`dev-manual/gobject-introspection:known issues`"
+section.
+
+Enabling the Generation of Introspection Data
+=============================================
+
+Enabling the generation of introspection data (GIR files) in your
+library package involves the following:
+
+#. Inherit the :ref:`ref-classes-gobject-introspection` class.
+
+#. Make sure introspection is not disabled anywhere in the recipe or
+ from anything the recipe includes. Also, make sure that
+ "gobject-introspection-data" is not in
+ :term:`DISTRO_FEATURES_BACKFILL_CONSIDERED`
+ and that "qemu-usermode" is not in
+ :term:`MACHINE_FEATURES_BACKFILL_CONSIDERED`.
+ If either of these conditions exists, nothing will happen.
+
+#. Try to build the recipe. If you encounter build errors that look like
+ something is unable to find ``.so`` libraries, check where these
+ libraries are located in the source tree and add the following to the
+ recipe::
+
+ GIR_EXTRA_LIBS_PATH = "${B}/something/.libs"
+
+ .. note::
+
+ See recipes in the ``oe-core`` repository that use the
+ :term:`GIR_EXTRA_LIBS_PATH` variable as an example.
+
+#. Look for any other errors, which probably mean that introspection
+ support in a package is not entirely standard, and thus breaks down
+ in a cross-compilation environment. For such cases, custom-made fixes
+ are needed. A good place to ask and receive help in these cases is
+ the :ref:`Yocto Project mailing
+ lists <resources-mailinglist>`.
+
+.. note::
+
+ A library that no longer builds against the latest Yocto Project
+ release and prints introspection-related errors is a good
+ candidate for the previous procedure.
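+
+As a minimal illustration of the first step, a hypothetical library
+recipe would simply add the class to its ``inherit`` line::
+
+    inherit autotools pkgconfig gobject-introspection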
+
+Disabling the Generation of Introspection Data
+==============================================
+
+You might find that you do not want to generate introspection data. Or,
+perhaps QEMU does not work on your build host and target architecture
+combination. If so, you can use either of the following methods to
+disable GIR file generation:
+
+- Add the following to your distro configuration::
+
+ DISTRO_FEATURES_BACKFILL_CONSIDERED = "gobject-introspection-data"
+
+ Adding this statement disables generating introspection data using
+ QEMU but will still enable building introspection tools and libraries
+ (i.e. building them does not require the use of QEMU).
+
+- Add the following to your machine configuration::
+
+ MACHINE_FEATURES_BACKFILL_CONSIDERED = "qemu-usermode"
+
+ Adding this statement disables the use of QEMU when building packages for your
+ machine. Currently, this feature is used only by introspection
+ recipes and has the same effect as the previously described option.
+
+ .. note::
+
+ Future releases of the Yocto Project might have other features
+ affected by this option.
+
+If you disable introspection data, you can still obtain it through other
+means such as copying the data from a suitable sysroot, or by generating
+it on the target hardware. The OpenEmbedded build system does not
+currently provide specific support for these techniques.
+
+Testing that Introspection Works in an Image
+============================================
+
+Use the following procedure to test if generating introspection data is
+working in an image:
+
+#. Make sure that "gobject-introspection-data" is not in
+ :term:`DISTRO_FEATURES_BACKFILL_CONSIDERED`
+ and that "qemu-usermode" is not in
+ :term:`MACHINE_FEATURES_BACKFILL_CONSIDERED`.
+
+#. Build ``core-image-sato``.
+
+#. Launch a Terminal and then start Python in the terminal.
+
+#. Enter the following in the terminal::
+
+ >>> from gi.repository import GLib
+ >>> GLib.get_host_name()
+
+#. For something a little more advanced, see:
+ https://python-gtk-3-tutorial.readthedocs.io/en/latest/introduction.html
+
+Known Issues
+============
+
+Here are known issues in GObject Introspection support:
+
+- ``qemu-ppc64`` immediately crashes. Consequently, you cannot build
+ introspection data on that architecture.
+
+- x32 is not supported by QEMU. Consequently, introspection data is
+ disabled.
+
+- musl causes transient GLib binaries to crash on assertion failures.
+ Consequently, generating introspection data is disabled.
+
+- Because QEMU is not able to run the binaries correctly, introspection
+ is disabled for some specific packages under specific architectures
+ (e.g. ``gcr``, ``libsecret``, and ``webkit``).
+
+- QEMU usermode might not work properly when running 64-bit binaries
+ under 32-bit host machines. In particular, "qemumips64" is known to
+ not work under i686.
+
diff --git a/documentation/dev-manual/index.rst b/documentation/dev-manual/index.rst
new file mode 100644
index 0000000000..9ccf60f701
--- /dev/null
+++ b/documentation/dev-manual/index.rst
@@ -0,0 +1,52 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+======================================
+Yocto Project Development Tasks Manual
+======================================
+
+.. toctree::
+ :caption: Table of Contents
+ :numbered:
+
+ intro
+ start
+ layers
+ customizing-images
+ new-recipe
+ new-machine
+ upgrading-recipes
+ temporary-source-code
+ quilt
+ development-shell
+ python-development-shell
+ building
+ speeding-up-build
+ libraries
+ prebuilt-libraries
+ x32-psabi
+ gobject-introspection
+ external-toolchain
+ wic
+ bmaptool
+ securing-images
+ custom-distribution
+ custom-template-configuration-directory
+ disk-space
+ packages
+ efficiently-fetching-sources
+ init-manager
+ device-manager
+ external-scm
+ read-only-rootfs
+ build-quality
+ runtime-testing
+ debugging
+ licenses
+ security-subjects
+ vulnerabilities
+ sbom
+ error-reporting-tool
+ wayland
+ qemu
+
+.. include:: /boilerplate.rst
diff --git a/documentation/dev-manual/init-manager.rst b/documentation/dev-manual/init-manager.rst
new file mode 100644
index 0000000000..ddce82b81f
--- /dev/null
+++ b/documentation/dev-manual/init-manager.rst
@@ -0,0 +1,162 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+.. _init-manager:
+
+Selecting an Initialization Manager
+***********************************
+
+By default, the Yocto Project uses :wikipedia:`SysVinit <Init#SysV-style>` as
+the initialization manager. There is also support for BusyBox init, a simpler
+implementation, as well as support for :wikipedia:`systemd <Systemd>`, which
+is a full replacement for init with parallel starting of services, reduced
+shell overhead, increased security and resource limits for services, and other
+features that are used by many distributions.
+
+Within the system, SysVinit and BusyBox init treat system components as
+services. These services are maintained as shell scripts stored in the
+``/etc/init.d/`` directory.
+
+SysVinit is more elaborate than BusyBox init and organizes services in
+different run levels. This organization is maintained by putting links
+to the services in the ``/etc/rcN.d/`` directories, where `N` is one
+of the following options: "S", "0", "1", "2", "3", "4", "5", or "6".
+
+.. note::
+
+ Each runlevel has a dependency on the previous runlevel. This
+ dependency allows the services to work properly.
+
+Both SysVinit and BusyBox init are configured through the ``/etc/inittab``
+file, with a very similar syntax, though of course BusyBox init features
+are more limited.
+
+In comparison, systemd treats components as units. Using units is a
+broader concept as compared to using a service. A unit includes several
+different types of entities. ``Service`` is one of the types of entities.
+The runlevel concept in SysVinit corresponds to the concept of a target
+in systemd, where target is also a type of supported unit.
+
+In systems with SysVinit or BusyBox init, services load sequentially (i.e. one
+by one) during init and parallelization is not supported. With systemd, services
+start in parallel. This method can have an impact on the startup performance
+of a given service, though systemd will also provide more services by default,
+therefore increasing the total system boot time. systemd also substantially
+increases system size because of its multiple components and the extra
+dependencies it pulls.
+
+On the contrary, BusyBox init is the simplest and the lightest solution and
+also comes with BusyBox mdev as device manager, a lighter replacement to
+:wikipedia:`udev <Udev>`, which SysVinit and systemd both use.
+
+The ":ref:`device-manager`" chapter has more details about device managers.
+
+Using SysVinit with udev
+=========================
+
+SysVinit with the udev device manager is the default setting in Poky,
+which corresponds to::
+
+ INIT_MANAGER = "sysvinit"
+
+Using BusyBox init with BusyBox mdev
+====================================
+
+BusyBox init with BusyBox mdev is the simplest and lightest solution
+for small root filesystems. All you need is BusyBox, which most systems
+have anyway::
+
+ INIT_MANAGER = "mdev-busybox"
+
+Using systemd
+=============
+
+The last option is to use systemd together with the udev device
+manager. This is the most powerful and versatile solution, especially
+for more complex systems::
+
+ INIT_MANAGER = "systemd"
+
+This will enable systemd and remove sysvinit components from the image.
+See :yocto_git:`meta/conf/distro/include/init-manager-systemd.inc
+</poky/tree/meta/conf/distro/include/init-manager-systemd.inc>` for exact
+details on what this does.
+
+Controlling systemd from the target command line
+--------------------------------------------------
+
+Here is a quick reference for controlling systemd from the command line on the
+target. Instead of opening and sometimes modifying files, most interaction
+happens through the ``systemctl`` and ``journalctl`` commands:
+
+- ``systemctl status``: show the status of all services
+- ``systemctl status <service>``: show the status of one service
+- ``systemctl [start|stop] <service>``: start or stop a service
+- ``systemctl [enable|disable] <service>``: enable or disable a service at boot time
+- ``systemctl list-units``: list all available units
+- ``journalctl -a``: show all logs for all services
+- ``journalctl -f``: show only the last log entries, and keep printing updates as they arrive
+- ``journalctl -u <service>``: show only the logs from a particular service
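+
+For instance, inspecting and restarting a single service (the service
+name here is just an example) combines the commands above as follows::
+
+    # systemctl status myservice.service
+    # journalctl -u myservice.service
+    # systemctl stop myservice.service
+    # systemctl start myservice.service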
+
+Using systemd-journald without a traditional syslog daemon
+----------------------------------------------------------
+
+Counter-intuitively, ``systemd-journald`` is not a syslog runtime or provider,
+and the proper way to use ``systemd-journald`` as your sole logging mechanism is to
+effectively disable syslog entirely by setting these variables in your distribution
+configuration file::
+
+ VIRTUAL-RUNTIME_syslog = ""
+ VIRTUAL-RUNTIME_base-utils-syslog = ""
+
+Doing so will prevent ``rsyslog`` / ``busybox-syslog`` from being pulled in by
+default, leaving only ``systemd-journald``.
+
+Summary
+-------
+
+The Yocto Project supports three different initialization managers, offering
+increasing levels of complexity and functionality:
+
+.. list-table::
+ :widths: 40 20 20 20
+ :header-rows: 1
+
+ * -
+ - BusyBox init
+ - SysVinit
+ - systemd
+ * - Size
+ - Small
+ - Small
+ - Big [#footnote-systemd-size]_
+ * - Complexity
+ - Small
+ - Medium
+ - High
+ * - Support for boot profiles
+ - No
+ - Yes ("runlevels")
+ - Yes ("targets")
+ * - Services defined as
+ - Shell scripts
+ - Shell scripts
+ - Description files
+ * - Starting services in parallel
+ - No
+ - No
+ - Yes
+ * - Setting service resource limits
+ - No
+ - No
+ - Yes
+ * - Support service isolation
+ - No
+ - No
+ - Yes
+ * - Integrated logging
+ - No
+ - No
+ - Yes
+
+.. [#footnote-systemd-size] Using systemd increases the ``core-image-minimal``
+ image size by 160\% for ``qemux86-64`` on Mickledore (4.2), compared to SysVinit.
diff --git a/documentation/dev-manual/intro.rst b/documentation/dev-manual/intro.rst
new file mode 100644
index 0000000000..0f7370a96d
--- /dev/null
+++ b/documentation/dev-manual/intro.rst
@@ -0,0 +1,59 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+******************************************
+The Yocto Project Development Tasks Manual
+******************************************
+
+Welcome
+=======
+
+Welcome to the Yocto Project Development Tasks Manual. This manual
+provides relevant procedures necessary for developing in the Yocto
+Project environment (i.e. developing embedded Linux images and
+user-space applications that run on targeted devices). This manual groups
+related procedures into higher-level sections. Procedures can consist of
+high-level steps or low-level steps depending on the topic.
+
+This manual provides the following:
+
+- Procedures that help you get going with the Yocto Project; for
+ example, procedures that show you how to set up a build host and work
+ with the Yocto Project source repositories.
+
+- Procedures that show you how to submit changes to the Yocto Project.
+ Changes can be improvements, new features, or bug fixes.
+
+- Procedures related to "everyday" tasks you perform while developing
+ images and applications using the Yocto Project, such as
+ creating a new layer, customizing an image, writing a new recipe,
+ and so forth.
+
+This manual does not provide the following:
+
+- Redundant step-by-step instructions: For example, the
+ :doc:`/sdk-manual/index` manual contains detailed
+ instructions on how to install an SDK, which is used to develop
+ applications for target hardware.
+
+- Reference or conceptual material: This type of material resides in an
+ appropriate reference manual. As an example, system variables are
+ documented in the :doc:`/ref-manual/index`.
+
+- Detailed public information not specific to the Yocto Project: For
+ example, exhaustive information on how to use the Git version
+ control system is better covered with Internet searches and official Git
+ documentation than through the Yocto Project documentation.
+
+Other Information
+=================
+
+Because this manual presents information for many different topics,
+supplemental information is recommended for full comprehension. For
+introductory information on the Yocto Project, see the
+:yocto_home:`Yocto Project Website <>`. If you want to build an image with no
+knowledge of Yocto Project as a way of quickly testing it out, see the
+:doc:`/brief-yoctoprojectqs/index` document.
+
+For a comprehensive list of links and other documentation, see the
+":ref:`ref-manual/resources:links and related documentation`"
+section in the Yocto Project Reference Manual.
diff --git a/documentation/dev-manual/layers.rst b/documentation/dev-manual/layers.rst
new file mode 100644
index 0000000000..91889bd0ae
--- /dev/null
+++ b/documentation/dev-manual/layers.rst
@@ -0,0 +1,919 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Understanding and Creating Layers
+*********************************
+
+The OpenEmbedded build system supports organizing
+:term:`Metadata` into multiple layers.
+Layers allow you to isolate different types of customizations from each
+other. For introductory information on the Yocto Project Layer Model,
+see the
+":ref:`overview-manual/yp-intro:the yocto project layer model`"
+section in the Yocto Project Overview and Concepts Manual.
+
+Creating Your Own Layer
+=======================
+
+.. note::
+
+ It is very easy to create your own layers to use with the OpenEmbedded
+ build system, as the Yocto Project ships with tools that speed up creating
+ layers. This section describes the steps you perform by hand to create
+ layers so that you can better understand them. For information about the
+ layer-creation tools, see the
+ ":ref:`bsp-guide/bsp:creating a new bsp layer using the \`\`bitbake-layers\`\` script`"
+ section in the Yocto Project Board Support Package (BSP) Developer's
+ Guide and the ":ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`"
+ section further down in this manual.
+
+Follow these general steps to create your layer without using tools:
+
+#. *Check Existing Layers:* Before creating a new layer, you should be
+ sure someone has not already created a layer containing the Metadata
+ you need. You can see the :oe_layerindex:`OpenEmbedded Metadata Index <>`
+ for a list of layers from the OpenEmbedded community that can be used in
+ the Yocto Project. You could find a layer that is identical or close
+ to what you need.
+
+#. *Create a Directory:* Create the directory for your layer. When you
+ create the layer, be sure to create the directory in an area not
+ associated with the Yocto Project :term:`Source Directory`
+ (e.g. the cloned ``poky`` repository).
+
+ While not strictly required, prepend the name of the directory with
+ the string "meta-". For example::
+
+ meta-mylayer
+ meta-GUI_xyz
+ meta-mymachine
+
+ With rare exceptions, a layer's name follows this form::
+
+ meta-root_name
+
+ Following this layer naming convention can save
+ you trouble later when tools, components, or variables "assume" your
+ layer name begins with "meta-". A notable example is in configuration
+ files as shown in the following step where layer names without the
+ "meta-" string are appended to several variables used in the
+ configuration.
+
+#. *Create a Layer Configuration File:* Inside your new layer folder,
+ you need to create a ``conf/layer.conf`` file. It is easiest to take
+ an existing layer configuration file and copy that to your layer's
+ ``conf`` directory and then modify the file as needed.
+
+ The ``meta-yocto-bsp/conf/layer.conf`` file in the Yocto Project
+ :yocto_git:`Source Repositories </poky/tree/meta-yocto-bsp/conf>`
+ demonstrates the required syntax. For your layer, you need to replace
+ "yoctobsp" with a unique identifier for your layer (e.g. "machinexyz"
+ for a layer named "meta-machinexyz")::
+
+ # We have a conf and classes directory, add to BBPATH
+ BBPATH .= ":${LAYERDIR}"
+
+ # We have recipes-* directories, add to BBFILES
+ BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+ ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+ BBFILE_COLLECTIONS += "yoctobsp"
+ BBFILE_PATTERN_yoctobsp = "^${LAYERDIR}/"
+ BBFILE_PRIORITY_yoctobsp = "5"
+ LAYERVERSION_yoctobsp = "4"
+ LAYERSERIES_COMPAT_yoctobsp = "dunfell"
+
+ Here is an explanation of the layer configuration file:
+
+ - :term:`BBPATH`: Adds the layer's
+ root directory to BitBake's search path. Through the use of the
+ :term:`BBPATH` variable, BitBake locates class files (``.bbclass``),
+ configuration files, and files that are included with ``include``
+ and ``require`` statements. For these cases, BitBake uses the
+ first file that matches the name found in :term:`BBPATH`. This is
+ similar to the way the ``PATH`` variable is used for binaries. It
+ is recommended, therefore, that you use unique class and
+ configuration filenames in your custom layer.
+
+ - :term:`BBFILES`: Defines the
+ location for all recipes in the layer.
+
+ - :term:`BBFILE_COLLECTIONS`:
+ Establishes the current layer through a unique identifier that is
+ used throughout the OpenEmbedded build system to refer to the
+ layer. In this example, the identifier "yoctobsp" is the
+ representation for the container layer named "meta-yocto-bsp".
+
+ - :term:`BBFILE_PATTERN`:
+ Expands immediately during parsing to provide the directory of the
+ layer.
+
+ - :term:`BBFILE_PRIORITY`:
+ Establishes a priority to use for recipes in the layer when the
+ OpenEmbedded build finds recipes of the same name in different
+ layers.
+
+ - :term:`LAYERVERSION`:
+ Establishes a version number for the layer. You can use this
+ version number to specify this exact version of the layer as a
+ dependency when using the
+ :term:`LAYERDEPENDS`
+ variable.
+
+ - :term:`LAYERDEPENDS`:
+ Lists all layers on which this layer depends (if any).
+
+ - :term:`LAYERSERIES_COMPAT`:
+ Lists the :yocto_wiki:`Yocto Project </Releases>`
+ releases for which the current version is compatible. This
+ variable is a good way to indicate if your particular layer is
+ current.
+
+
+ .. note::
+
+ A layer does not have to contain only recipes (``.bb``) or append
+ files (``.bbappend``). Generally, developers create layers using
+ ``bitbake-layers create-layer``.
+ See ":ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`",
+ explaining how the ``layer.conf`` file is created from a template located in
+ ``meta/lib/bblayers/templates/layer.conf``.
+ In fact, none of the variables set in ``layer.conf`` are mandatory,
+ except when :term:`BBFILE_COLLECTIONS` is present. In this case
+ :term:`LAYERSERIES_COMPAT` and :term:`BBFILE_PATTERN` have to be
+ defined too.
+
+#. *Add Content:* Depending on the type of layer, add the content. If
+ the layer adds support for a machine, add the machine configuration
+ in a ``conf/machine/`` file within the layer. If the layer adds
+ distro policy, add the distro configuration in a ``conf/distro/``
+ file within the layer. If the layer introduces new recipes, put the
+ recipes you need in ``recipes-*`` subdirectories within the layer.
+
+ .. note::
+
+ For an explanation of layer hierarchy that is compliant with the
+ Yocto Project, see the ":ref:`bsp-guide/bsp:example filesystem layout`"
+ section in the Yocto Project Board Support Package (BSP) Developer's Guide.
+
+#. *Optionally Test for Compatibility:* If you want permission to use
+ the Yocto Project Compatibility logo with your layer or application
+ that uses your layer, perform the steps to apply for compatibility.
+ See the
+ ":ref:`dev-manual/layers:making sure your layer is compatible with yocto project`"
+ section for more information.
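+
+To illustrate the steps above, here is a hypothetical layout for a layer
+named ``meta-mylayer`` (the machine and recipe names are only examples)::
+
+    meta-mylayer/
+        conf/layer.conf
+        conf/machine/mymachine.conf
+        recipes-example/example/example_0.1.bb
+        README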
+
+Following Best Practices When Creating Layers
+=============================================
+
+To create layers that are easier to maintain and that will not impact
+builds for other machines, you should consider the information in the
+following list:
+
+- *Avoid "Overlaying" Entire Recipes from Other Layers in Your
+ Configuration:* In other words, do not copy an entire recipe into
+ your layer and then modify it. Rather, use an append file
+ (``.bbappend``) to override only those parts of the original recipe
+ you need to modify.
+
+- *Avoid Duplicating Include Files:* Use append files (``.bbappend``)
+ for each recipe that uses an include file. Or, if you are introducing
+ a new recipe that requires the included file, use the path relative
+ to the original layer directory to refer to the file. For example,
+ use ``require recipes-core/``\ `package`\ ``/``\ `file`\ ``.inc`` instead
+ of ``require`` `file`\ ``.inc``. If you're finding you have to overlay
+ the include file, it could indicate a deficiency in the include file
+ in the layer to which it originally belongs. If this is the case, you
+ should try to address that deficiency instead of overlaying the
+ include file. For example, you could address this by getting the
+ maintainer of the include file to add a variable or variables to make
+ it easy to override the parts needing to be overridden.
+
+- *Structure Your Layers:* Proper use of overrides within append files
+ and placement of machine-specific files within your layer can ensure
+ that a build is not using the wrong Metadata and negatively impacting
+ a build for a different machine. Here are some examples:
+
+ - *Modify Variables to Support a Different Machine:* Suppose you
+ have a layer named ``meta-one`` that adds support for building
+ machine "one". To do so, you use an append file named
+ ``base-files.bbappend`` and create a dependency on "foo" by
+ altering the :term:`DEPENDS`
+ variable::
+
+ DEPENDS = "foo"
+
+ The dependency is created during any
+ build that includes the layer ``meta-one``. However, you might not
+ want this dependency for all machines. For example, suppose you
+ are building for machine "two" but your ``bblayers.conf`` file has
+ the ``meta-one`` layer included. During the build, the
+ ``base-files`` for machine "two" will also have the dependency on
+ ``foo``.
+
+ To make sure your changes apply only when building machine "one",
+ use a machine override with the :term:`DEPENDS` statement::
+
+ DEPENDS:one = "foo"
+
+ You should follow the same strategy when using ``:append``
+ and ``:prepend`` operations::
+
+ DEPENDS:append:one = " foo"
+ DEPENDS:prepend:one = "foo "
+
+ As an actual example, here's a
+ snippet from the generic kernel include file ``linux-yocto.inc``,
+ wherein the kernel compile and link options are adjusted in the
+ case of a subset of the supported architectures::
+
+ DEPENDS:append:aarch64 = " libgcc"
+ KERNEL_CC:append:aarch64 = " ${TOOLCHAIN_OPTIONS}"
+ KERNEL_LD:append:aarch64 = " ${TOOLCHAIN_OPTIONS}"
+
+ DEPENDS:append:nios2 = " libgcc"
+ KERNEL_CC:append:nios2 = " ${TOOLCHAIN_OPTIONS}"
+ KERNEL_LD:append:nios2 = " ${TOOLCHAIN_OPTIONS}"
+
+ DEPENDS:append:arc = " libgcc"
+ KERNEL_CC:append:arc = " ${TOOLCHAIN_OPTIONS}"
+ KERNEL_LD:append:arc = " ${TOOLCHAIN_OPTIONS}"
+
+ KERNEL_FEATURES:append:qemuall=" features/debug/printk.scc"
+
+ - *Place Machine-Specific Files in Machine-Specific Locations:* When
+ you have a base recipe, such as ``base-files.bb``, that contains a
+ :term:`SRC_URI` statement to a
+ file, you can use an append file to cause the build to use your
+ own version of the file. For example, an append file in your layer
+ at ``meta-one/recipes-core/base-files/base-files.bbappend`` could
+ extend :term:`FILESPATH` using :term:`FILESEXTRAPATHS` as follows::
+
+ FILESEXTRAPATHS:prepend := "${THISDIR}/${BPN}:"
+
+ The build for machine "one" will pick up your machine-specific file as
+ long as you have the file in
+ ``meta-one/recipes-core/base-files/base-files/``. However, if you
+ are building for a different machine and the ``bblayers.conf``
+ file includes the ``meta-one`` layer and the location of your
+ machine-specific file is the first location where that file is
+ found according to :term:`FILESPATH`, builds for all machines will
+ also use that machine-specific file.
+
+ You can make sure that a machine-specific file is used for a
+ particular machine by putting the file in a subdirectory specific
+ to the machine. For example, rather than placing the file in
+ ``meta-one/recipes-core/base-files/base-files/`` as shown above,
+ put it in ``meta-one/recipes-core/base-files/base-files/one/``.
+ Not only does this make sure the file is used only when building
+ for machine "one", but the build process locates the file more
+ quickly.
+
+ In summary, you need to place all files referenced from
+ :term:`SRC_URI` in a machine-specific subdirectory within the layer in
+ order to restrict those files to machine-specific builds.
+
+- *Perform Steps to Apply for Yocto Project Compatibility:* If you want
+ permission to use the Yocto Project Compatibility logo with your
+ layer or application that uses your layer, perform the steps to apply
+ for compatibility. See the
+ ":ref:`dev-manual/layers:making sure your layer is compatible with yocto project`"
+ section for more information.
+
+- *Follow the Layer Naming Convention:* Store custom layers in a Git
+ repository that use the ``meta-layer_name`` format.
+ repository that uses the ``meta-layer_name`` format.
+- *Group Your Layers Locally:* Clone your repository alongside other
+ cloned ``meta`` directories from the :term:`Source Directory`.
+
+Making Sure Your Layer is Compatible With Yocto Project
+=======================================================
+
+When you create a layer used with the Yocto Project, it is advantageous
+to make sure that the layer interacts well with existing Yocto Project
+layers (i.e. the layer is compatible with the Yocto Project). Ensuring
+compatibility makes the layer easy to be consumed by others in the Yocto
+Project community and could allow you permission to use the Yocto
+Project Compatible Logo.
+
+.. note::
+
+ Only Yocto Project member organizations are permitted to use the
+ Yocto Project Compatible Logo. The logo is not available for general
+ use. For information on how to become a Yocto Project member
+ organization, see the :yocto_home:`Yocto Project Website <>`.
+
+The Yocto Project Compatibility Program consists of a layer application
+process that requests permission to use the Yocto Project Compatibility
+Logo for your layer and application. The process consists of two parts:
+
+#. Successfully passing a script (``yocto-check-layer``) that when run
+ against your layer, tests it against constraints based on experiences
+ of how layers have worked in the real world and where pitfalls have
+ been found. Getting a "PASS" result from the script is required for
+ successful compatibility registration.
+
+#. Completion of an application acceptance form, which you can find at
+ :yocto_home:`/compatible-registration/`.
+
+To be granted permission to use the logo, you need to satisfy the
+following:
+
+- Be able to check the box indicating that you got a "PASS" when
+ running the script against your layer.
+
+- Answer "Yes" to the questions on the form or have an acceptable
+ explanation for any questions answered "No".
+
+- Be a Yocto Project Member Organization.
+
+The remainder of this section presents information on the registration
+form and on the ``yocto-check-layer`` script.
+
+Yocto Project Compatible Program Application
+--------------------------------------------
+
+Use the form to apply for your layer's approval. Upon successful
+application, you can use the Yocto Project Compatibility Logo with your
+layer and the application that uses your layer.
+
+To access the form, use this link:
+:yocto_home:`/compatible-registration`.
+Follow the instructions on the form to complete your application.
+
+The application consists of the following sections:
+
+- *Contact Information:* Provide your contact information as the fields
+ require. Along with your information, provide the released versions
+ of the Yocto Project for which your layer is compatible.
+
+- *Acceptance Criteria:* Provide "Yes" or "No" answers for each of the
+ items in the checklist. There is space at the bottom of the form for
+ any explanations for items for which you answered "No".
+
+- *Recommendations:* Provide answers for the questions regarding Linux
+ kernel use and build success.
+
+``yocto-check-layer`` Script
+----------------------------
+
+The ``yocto-check-layer`` script provides you a way to assess how
+compatible your layer is with the Yocto Project. You should run this
+script prior to using the form to apply for compatibility as described
+in the previous section. You need to achieve a "PASS" result in order to
+have your application form successfully processed.
+
+The script divides tests into three areas: COMMON, BSP, and DISTRO. For
+example, given a distribution layer (DISTRO), the layer must pass both
+the COMMON and DISTRO related tests. Furthermore, if your layer is a BSP
+layer, the layer must pass the COMMON and BSP set of tests.
+
+To execute the script, enter the following commands from your build
+directory::
+
+ $ source oe-init-build-env
+ $ yocto-check-layer your_layer_directory
+
+Be sure to provide the actual directory for your
+layer as part of the command.
+
+Entering the command causes the script to determine the type of layer
+and then to execute a set of specific tests against the layer. The
+following list overviews the tests:
+
+- ``common.test_readme``: Tests if a ``README`` file exists in the
+ layer and the file is not empty.
+
+- ``common.test_parse``: Tests to make sure that BitBake can parse the
+ files without error (i.e. ``bitbake -p``).
+
+- ``common.test_show_environment``: Tests that the global or per-recipe
+ environment is in order without errors (i.e. ``bitbake -e``).
+
+- ``common.test_world``: Verifies that ``bitbake world`` works.
+
+- ``common.test_signatures``: Tests to be sure that BSP and DISTRO
+ layers do not come with recipes that change signatures.
+
+- ``common.test_layerseries_compat``: Verifies layer compatibility is
+ set properly.
+
+- ``bsp.test_bsp_defines_machines``: Tests if a BSP layer has machine
+ configurations.
+
+- ``bsp.test_bsp_no_set_machine``: Tests to ensure a BSP layer does not
+ set the machine when the layer is added.
+
+- ``bsp.test_machine_world``: Verifies that ``bitbake world`` works
+ regardless of which machine is selected.
+
+- ``bsp.test_machine_signatures``: Verifies that building for a
+ particular machine affects only the signature of tasks specific to
+ that machine.
+
+- ``distro.test_distro_defines_distros``: Tests if a DISTRO layer has
+ distro configurations.
+
+- ``distro.test_distro_no_set_distros``: Tests to ensure a DISTRO layer
+ does not set the distribution when the layer is added.
+
+Enabling Your Layer
+===================
+
+Before the OpenEmbedded build system can use your new layer, you need to
+enable it. To enable your layer, simply add your layer's path to the
+:term:`BBLAYERS` variable in your ``conf/bblayers.conf`` file, which is
+found in the :term:`Build Directory`. The following example shows how to
+enable your new ``meta-mylayer`` layer (note how your new layer exists
+outside of the official ``poky`` repository which you would have checked
+out earlier)::
+
+ # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
+ # changes incompatibly
+ POKY_BBLAYERS_CONF_VERSION = "2"
+ BBPATH = "${TOPDIR}"
+ BBFILES ?= ""
+ BBLAYERS ?= " \
+ /home/user/poky/meta \
+ /home/user/poky/meta-poky \
+ /home/user/poky/meta-yocto-bsp \
+ /home/user/mystuff/meta-mylayer \
+ "
+
+BitBake parses each ``conf/layer.conf`` file from the top down as
+specified in the :term:`BBLAYERS` variable within the ``conf/bblayers.conf``
+file. During the processing of each ``conf/layer.conf`` file, BitBake
+adds the recipes, classes and configurations contained within the
+particular layer to the source directory.
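+
+Alternatively, rather than editing ``conf/bblayers.conf`` by hand, you
+can let the ``bitbake-layers`` tool add the entry for you, for example::
+
+    $ bitbake-layers add-layer /home/user/mystuff/meta-mylayer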
+
+Appending Other Layers Metadata With Your Layer
+===============================================
+
+A recipe that appends Metadata to another recipe is called a BitBake
+append file. A BitBake append file uses the ``.bbappend`` file type
+suffix, while the corresponding recipe to which Metadata is being
+appended uses the ``.bb`` file type suffix.
+
+You can use a ``.bbappend`` file in your layer to make additions or
+changes to the content of another layer's recipe without having to copy
+the other layer's recipe into your layer. Your ``.bbappend`` file
+resides in your layer, while the main ``.bb`` recipe file to which you
+are appending Metadata resides in a different layer.
+
+Being able to append information to an existing recipe not only avoids
+duplication, but also automatically applies recipe changes from a
+different layer into your layer. If you were copying recipes, you would
+have to manually merge changes as they occur.
+
+When you create an append file, you must use the same root name as the
+corresponding recipe file. For example, the append file
+``someapp_3.1.bbappend`` must apply to ``someapp_3.1.bb``. This
+means the original recipe and append filenames are version
+number-specific. If the corresponding recipe is renamed to update to a
+newer version, you must also rename and possibly update the
+corresponding ``.bbappend`` as well. During the build process, BitBake
+displays an error on starting if it detects a ``.bbappend`` file that
+does not have a corresponding recipe with a matching name. See the
+:term:`BB_DANGLINGAPPENDS_WARNONLY`
+variable for information on how to handle this error.
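+
+As a side note, BitBake also accepts the ``%`` wildcard in the version
+part of an append file name, which can reduce this renaming churn. For a
+hypothetical recipe, the following append file name matches any
+``someapp_3.x.bb`` recipe version::
+
+    someapp_3.%.bbappend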
+
+Overlaying a File Using Your Layer
+----------------------------------
+
+As an example, consider the main formfactor recipe and a corresponding
+formfactor append file both from the :term:`Source Directory`.
+Here is the main
+formfactor recipe, which is named ``formfactor_0.0.bb`` and located in
+the "meta" layer at ``meta/recipes-bsp/formfactor``::
+
+ SUMMARY = "Device formfactor information"
+ DESCRIPTION = "A formfactor configuration file provides information about the \
+ target hardware for which the image is being built and information that the \
+ build system cannot obtain from other sources such as the kernel."
+ SECTION = "base"
+ LICENSE = "MIT"
+ LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+ PR = "r45"
+
+ SRC_URI = "file://config file://machconfig"
+ S = "${WORKDIR}"
+
+ PACKAGE_ARCH = "${MACHINE_ARCH}"
+ INHIBIT_DEFAULT_DEPS = "1"
+
+ do_install() {
+ # Install file only if it has contents
+ install -d ${D}${sysconfdir}/formfactor/
+ install -m 0644 ${S}/config ${D}${sysconfdir}/formfactor/
+ if [ -s "${S}/machconfig" ]; then
+ install -m 0644 ${S}/machconfig ${D}${sysconfdir}/formfactor/
+ fi
+ }
+
+In the main recipe, note the :term:`SRC_URI`
+variable, which tells the OpenEmbedded build system where to find files
+during the build.
+
+Here is the append file, which is named ``formfactor_0.0.bbappend``
+and is from the Raspberry Pi BSP Layer named ``meta-raspberrypi``. The
+file is in the layer at ``recipes-bsp/formfactor``::
+
+ FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
+
+By default, the build system uses the
+:term:`FILESPATH` variable to
+locate files. This append file extends the locations by setting the
+:term:`FILESEXTRAPATHS`
+variable. Setting this variable in the ``.bbappend`` file is the most
+reliable and recommended method for adding directories to the search
+path used by the build system to find files.
+
+The statement in this example extends the directories to include
+``${``\ :term:`THISDIR`\ ``}/${``\ :term:`PN`\ ``}``,
+which resolves to a directory named ``formfactor`` in the same directory
+in which the append file resides (i.e.
+``meta-raspberrypi/recipes-bsp/formfactor``). This implies that you must
+set up the supporting directory structure in your layer to contain any
+files or patches you are including from the layer.
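+
+For this example, the supporting directory structure in the
+``meta-raspberrypi`` layer would look similar to the following (only the
+relevant files are shown)::
+
+   meta-raspberrypi/
+       recipes-bsp/
+           formfactor/
+               formfactor_0.0.bbappend
+               formfactor/
+                   rpi/
+                       machconfig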
+
+Using the immediate expansion assignment operator ``:=`` is important
+because of the reference to :term:`THISDIR`. The trailing colon character is
+important as it ensures that items in the list remain colon-separated.
+
+.. note::
+
+ BitBake automatically defines the :term:`THISDIR` variable. You should
+ never set this variable yourself. Using ":prepend" as part of the
+ :term:`FILESEXTRAPATHS` ensures your path will be searched prior to other
+ paths in the final list.
+
+   Also, not all append files add extra files. Many append files simply
+   add build options (e.g. to enable ``systemd`` support). In those cases,
+   your append file would not even use the :term:`FILESEXTRAPATHS` statement.
+
+The end result of this ``.bbappend`` file is that on a Raspberry Pi, where
+``rpi`` exists in the list of :term:`OVERRIDES`, the file
+``meta-raspberrypi/recipes-bsp/formfactor/formfactor/rpi/machconfig`` is
+used during :ref:`ref-tasks-fetch`. The test for a non-zero file size in
+:ref:`ref-tasks-install` then succeeds, and the file is installed.
+
+Installing Additional Files Using Your Layer
+--------------------------------------------
+
+As another example, consider the main ``xserver-xf86-config`` recipe and a
+corresponding ``xserver-xf86-config`` append file both from the :term:`Source
+Directory`. Here is the main ``xserver-xf86-config`` recipe, which is named
+``xserver-xf86-config_0.1.bb`` and located in the "meta" layer at
+``meta/recipes-graphics/xorg-xserver``::
+
+ SUMMARY = "X.Org X server configuration file"
+ HOMEPAGE = "http://www.x.org"
+ SECTION = "x11/base"
+ LICENSE = "MIT"
+ LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+ PR = "r33"
+
+ SRC_URI = "file://xorg.conf"
+
+ S = "${WORKDIR}"
+
+ CONFFILES:${PN} = "${sysconfdir}/X11/xorg.conf"
+
+ PACKAGE_ARCH = "${MACHINE_ARCH}"
+ ALLOW_EMPTY:${PN} = "1"
+
+ do_install () {
+ if test -s ${WORKDIR}/xorg.conf; then
+ install -d ${D}/${sysconfdir}/X11
+ install -m 0644 ${WORKDIR}/xorg.conf ${D}/${sysconfdir}/X11/
+ fi
+ }
+
+Here is the append file, which is named ``xserver-xf86-config_%.bbappend``
+and is from the Raspberry Pi BSP Layer named ``meta-raspberrypi``. The
+file is in the layer at ``recipes-graphics/xorg-xserver``::
+
+ FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
+
+ SRC_URI:append:rpi = " \
+ file://xorg.conf.d/98-pitft.conf \
+ file://xorg.conf.d/99-calibration.conf \
+ "
+ do_install:append:rpi () {
+ PITFT="${@bb.utils.contains("MACHINE_FEATURES", "pitft", "1", "0", d)}"
+ if [ "${PITFT}" = "1" ]; then
+ install -d ${D}/${sysconfdir}/X11/xorg.conf.d/
+ install -m 0644 ${WORKDIR}/xorg.conf.d/98-pitft.conf ${D}/${sysconfdir}/X11/xorg.conf.d/
+ install -m 0644 ${WORKDIR}/xorg.conf.d/99-calibration.conf ${D}/${sysconfdir}/X11/xorg.conf.d/
+ fi
+ }
+
+ FILES:${PN}:append:rpi = " ${sysconfdir}/X11/xorg.conf.d/*"
+
+Building on the previous example, we once again set the
+:term:`FILESEXTRAPATHS` variable. In this case we also use
+:term:`SRC_URI` to list additional source files to use when ``rpi`` is found in
+the list of :term:`OVERRIDES`. The :ref:`ref-tasks-install` task then checks
+whether the "pitft" feature is present in :term:`MACHINE_FEATURES`; if it is,
+the additional files are installed. These additional files are listed in
+:term:`FILES` so that they will be packaged.
+
+Prioritizing Your Layer
+=======================
+
+Each layer is assigned a priority value. Priority values control which
+layer takes precedence if there are recipe files with the same name in
+multiple layers. For these cases, the recipe file from the layer with a
+higher priority number takes precedence. Priority values also affect the
+order in which multiple ``.bbappend`` files for the same recipe are
+applied. You can either specify the priority manually, or allow the
+build system to calculate it based on the layer's dependencies.
+
+To specify the layer's priority manually, use the
+:term:`BBFILE_PRIORITY`
+variable and append the layer's root name::
+
+ BBFILE_PRIORITY_mylayer = "1"
+
+.. note::
+
+ It is possible for a recipe with a lower version number
+ :term:`PV` in a layer that has a higher
+ priority to take precedence.
+
+ Also, the layer priority does not currently affect the precedence
+ order of ``.conf`` or ``.bbclass`` files. Future versions of BitBake
+ might address this.
+
+Managing Layers
+===============
+
+You can use the BitBake layer management tool ``bitbake-layers`` to
+provide a view into the structure of recipes across a multi-layer
+project. Being able to generate output that reports on configured layers
+with their paths and priorities and on ``.bbappend`` files and their
+applicable recipes can help to reveal potential problems.
+
+For help on the BitBake layer management tool, use the following
+command::
+
+ $ bitbake-layers --help
+
+The following list describes the available commands:
+
+- ``help:`` Displays general help or help on a specified command.
+
+- ``show-layers:`` Shows the current configured layers.
+
+- ``show-overlayed:`` Lists overlayed recipes. A recipe is overlayed
+ when a recipe with the same name exists in another layer that has a
+ higher layer priority.
+
+- ``show-recipes:`` Lists available recipes and the layers that
+ provide them.
+
+-  ``show-appends:`` Lists ``.bbappend`` files and the recipe files to
+   which they apply. See the example following this list.
+
+- ``show-cross-depends:`` Lists dependency relationships between
+ recipes that cross layer boundaries.
+
+- ``add-layer:`` Adds a layer to ``bblayers.conf``.
+
+-  ``remove-layer:`` Removes a layer from ``bblayers.conf``.
+
+- ``flatten:`` Flattens the layer configuration into a separate
+ output directory. Flattening your layer configuration builds a
+ "flattened" directory that contains the contents of all layers, with
+ any overlayed recipes removed and any ``.bbappend`` files appended to
+ the corresponding recipes. You might have to perform some manual
+ cleanup of the flattened layer as follows:
+
+ - Non-recipe files (such as patches) are overwritten. The flatten
+ command shows a warning for these files.
+
+   -  Anything beyond the normal layer setup that was added to a
+      ``layer.conf`` file needs manual attention: only the lowest
+      priority layer's ``layer.conf`` is used in the flattened output.
+
+ - Overridden and appended items from ``.bbappend`` files need to be
+ cleaned up. The contents of each ``.bbappend`` end up in the
+ flattened recipe. However, if there are appended or changed
+ variable values, you need to tidy these up yourself. Consider the
+ following example. Here, the ``bitbake-layers`` command adds the
+ line ``#### bbappended ...`` so that you know where the following
+ lines originate::
+
+ ...
+ DESCRIPTION = "A useful utility"
+ ...
+ EXTRA_OECONF = "--enable-something"
+ ...
+
+ #### bbappended from meta-anotherlayer ####
+
+ DESCRIPTION = "Customized utility"
+ EXTRA_OECONF += "--enable-somethingelse"
+
+
+      Ideally, you would then tidy up these changes as follows::
+
+ ...
+ DESCRIPTION = "Customized utility"
+ ...
+ EXTRA_OECONF = "--enable-something --enable-somethingelse"
+ ...
+
+- ``layerindex-fetch``: Fetches a layer from a layer index, along
+ with its dependent layers, and adds the layers to the
+ ``conf/bblayers.conf`` file.
+
+- ``layerindex-show-depends``: Finds layer dependencies from the
+ layer index.
+
+- ``save-build-conf``: Saves the currently active build configuration
+ (``conf/local.conf``, ``conf/bblayers.conf``) as a template into a layer.
+ This template can later be used for setting up builds via :term:`TEMPLATECONF`.
+ For information about saving and using configuration templates, see
+ ":ref:`dev-manual/custom-template-configuration-directory:creating a custom template configuration directory`".
+
+- ``create-layer``: Creates a basic layer.
+
+- ``create-layers-setup``: Writes out a configuration file and/or a script that
+ can replicate the directory structure and revisions of the layers in a current build.
+ For more information, see ":ref:`dev-manual/layers:saving and restoring the layers setup`".
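+
+As an example of using one of these commands, the following lists the
+append files in the current configuration and the recipes they apply to.
+The output shown here is illustrative only; the exact recipes and paths
+depend on your layers and release::
+
+   $ bitbake-layers show-appends
+   NOTE: Starting bitbake server...
+   === Appended recipes ===
+   busybox_1.36.1.bb:
+     /home/user/mystuff/meta-mylayer/recipes-core/busybox/busybox_%.bbappend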
+
+Creating a General Layer Using the ``bitbake-layers`` Script
+============================================================
+
+The ``bitbake-layers`` script with the ``create-layer`` subcommand
+simplifies creating a new general layer.
+
+.. note::
+
+ - For information on BSP layers, see the ":ref:`bsp-guide/bsp:bsp layers`"
+ section in the Yocto
+ Project Board Specific (BSP) Developer's Guide.
+
+ - In order to use a layer with the OpenEmbedded build system, you
+ need to add the layer to your ``bblayers.conf`` configuration
+ file. See the ":ref:`dev-manual/layers:adding a layer using the \`\`bitbake-layers\`\` script`"
+ section for more information.
+
+The default mode of the script's operation with this subcommand is to
+create a layer with the following:
+
+- A layer priority of 6.
+
+- A ``conf`` subdirectory that contains a ``layer.conf`` file.
+
+- A ``recipes-example`` subdirectory that contains a further
+ subdirectory named ``example``, which contains an ``example.bb``
+ recipe file.
+
+- A ``COPYING.MIT``, which is the license statement for the layer. The
+ script assumes you want to use the MIT license, which is typical for
+ most layers, for the contents of the layer itself.
+
+- A ``README`` file, which is a file describing the contents of your
+ new layer.
+
+In its simplest form, you can use the following command form to create a
+layer. The command creates a layer whose name corresponds to
+"your_layer_name" in the current directory::
+
+ $ bitbake-layers create-layer your_layer_name
+
+As an example, the following command creates a layer named ``meta-scottrif``
+in your home directory::
+
+ $ cd /usr/home
+ $ bitbake-layers create-layer meta-scottrif
+ NOTE: Starting bitbake server...
+ Add your new layer with 'bitbake-layers add-layer meta-scottrif'
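+
+The ``conf/layer.conf`` file generated for such a layer looks similar to
+the following. The exact contents, in particular the
+:term:`LAYERSERIES_COMPAT` value, depend on the release you are using::
+
+   # We have a conf and classes directory, add to BBPATH
+   BBPATH .= ":${LAYERDIR}"
+
+   # We have recipes-* directories, add to BBFILES
+   BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+               ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+   BBFILE_COLLECTIONS += "meta-scottrif"
+   BBFILE_PATTERN_meta-scottrif = "^${LAYERDIR}/"
+   BBFILE_PRIORITY_meta-scottrif = "6"
+
+   LAYERDEPENDS_meta-scottrif = "core"
+   LAYERSERIES_COMPAT_meta-scottrif = "<release codename>"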
+
+If you want to set the priority of the layer to other than the default
+value of "6", you can either use the ``--priority`` option or you
+can edit the
+:term:`BBFILE_PRIORITY` value
+in the ``conf/layer.conf`` after the script creates it. Furthermore, if
+you want to give the example recipe file some name other than the
+default, you can use the ``--example-recipe-name`` option.
+
+The easiest way to see how the ``bitbake-layers create-layer`` command
+works is to experiment with the script. You can also read the usage
+information by entering the following::
+
+ $ bitbake-layers create-layer --help
+ NOTE: Starting bitbake server...
+ usage: bitbake-layers create-layer [-h] [--priority PRIORITY]
+ [--example-recipe-name EXAMPLERECIPE]
+ layerdir
+
+ Create a basic layer
+
+ positional arguments:
+ layerdir Layer directory to create
+
+ optional arguments:
+ -h, --help show this help message and exit
+ --priority PRIORITY, -p PRIORITY
+ Layer directory to create
+ --example-recipe-name EXAMPLERECIPE, -e EXAMPLERECIPE
+ Filename of the example recipe
+
+Adding a Layer Using the ``bitbake-layers`` Script
+==================================================
+
+Once you create your general layer, you must add it to your
+``bblayers.conf`` file. Adding the layer to this configuration file
+makes the OpenEmbedded build system aware of your layer so that it can
+search it for metadata.
+
+Add your layer by using the ``bitbake-layers add-layer`` command::
+
+ $ bitbake-layers add-layer your_layer_name
+
+Here is an example that adds a
+layer named ``meta-scottrif`` to the configuration file. Following the
+command that adds the layer is another ``bitbake-layers`` command that
+shows the layers that are in your ``bblayers.conf`` file::
+
+ $ bitbake-layers add-layer meta-scottrif
+ NOTE: Starting bitbake server...
+ Parsing recipes: 100% |##########################################################| Time: 0:00:49
+ Parsing of 1441 .bb files complete (0 cached, 1441 parsed). 2055 targets, 56 skipped, 0 masked, 0 errors.
+ $ bitbake-layers show-layers
+ NOTE: Starting bitbake server...
+ layer path priority
+ ==========================================================================
+ meta /home/scottrif/poky/meta 5
+ meta-poky /home/scottrif/poky/meta-poky 5
+ meta-yocto-bsp /home/scottrif/poky/meta-yocto-bsp 5
+ workspace /home/scottrif/poky/build/workspace 99
+ meta-scottrif /home/scottrif/poky/build/meta-scottrif 6
+
+
+Adding the layer to this file
+enables the build system to locate the layer during the build.
+
+.. note::
+
+ During a build, the OpenEmbedded build system looks in the layers
+ from the top of the list down to the bottom in that order.
+
+Saving and restoring the layers setup
+=====================================
+
+Once you have a working build with the correct set of layers, it is beneficial
+to capture the layer setup --- what they are, which repositories they come from
+and which SCM revisions they're at --- into a configuration file, so that this
+setup can be easily replicated later, perhaps on a different machine. Here's
+how to do this::
+
+ $ bitbake-layers create-layers-setup /srv/work/alex/meta-alex/
+ NOTE: Starting bitbake server...
+ NOTE: Created /srv/work/alex/meta-alex/setup-layers.json
+ NOTE: Created /srv/work/alex/meta-alex/setup-layers
+
+The tool needs a single argument: the directory where the output should be placed. The
+output consists of a JSON-formatted layer configuration and a ``setup-layers`` script that
+can use that configuration to restore the layers in a different location, or on a different
+host machine. The argument can point to a custom layer (which is then treated as a
+"bootstrap" layer that needs to be checked out first), or to a completely independent
+location.
+
+The replication of the layers is performed by running the ``setup-layers`` script provided
+above:
+
+#. Clone the bootstrap layer or some other repository to obtain
+ the json config and the setup script that can use it.
+
+#. Run the script directly with no options::
+
+ alex@Zen2:/srv/work/alex/my-build$ meta-alex/setup-layers
+ Note: not checking out source meta-alex, use --force-bootstraplayer-checkout to override.
+
+ Setting up source meta-intel, revision 15.0-hardknott-3.3-310-g0a96edae, branch master
+ Running 'git init -q /srv/work/alex/my-build/meta-intel'
+ Running 'git remote remove origin > /dev/null 2>&1; git remote add origin git://git.yoctoproject.org/meta-intel' in /srv/work/alex/my-build/meta-intel
+ Running 'git fetch -q origin || true' in /srv/work/alex/my-build/meta-intel
+ Running 'git checkout -q 0a96edae609a3f48befac36af82cf1eed6786b4a' in /srv/work/alex/my-build/meta-intel
+
+ Setting up source poky, revision 4.1_M1-372-g55483d28f2, branch akanavin/setup-layers
+ Running 'git init -q /srv/work/alex/my-build/poky'
+ Running 'git remote remove origin > /dev/null 2>&1; git remote add origin git://git.yoctoproject.org/poky' in /srv/work/alex/my-build/poky
+ Running 'git fetch -q origin || true' in /srv/work/alex/my-build/poky
+ Running 'git remote remove poky-contrib > /dev/null 2>&1; git remote add poky-contrib ssh://git@push.yoctoproject.org/poky-contrib' in /srv/work/alex/my-build/poky
+ Running 'git fetch -q poky-contrib || true' in /srv/work/alex/my-build/poky
+ Running 'git checkout -q 11db0390b02acac1324e0f827beb0e2e3d0d1d63' in /srv/work/alex/my-build/poky
+
+.. note::
+ This will work to update an existing checkout as well.
+
+.. note::
+ The script is self-sufficient and requires only python3
+ and git on the build machine.
+
+.. note::
+   Both ``create-layers-setup`` and ``setup-layers`` provide several additional
+   options that customize their behavior; you can study them via the ``--help``
+   command line parameter.
+
diff --git a/documentation/dev-manual/libraries.rst b/documentation/dev-manual/libraries.rst
new file mode 100644
index 0000000000..521dbb9a7c
--- /dev/null
+++ b/documentation/dev-manual/libraries.rst
@@ -0,0 +1,267 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Working With Libraries
+**********************
+
+Libraries are an integral part of your system. This section describes
+some common practices you might find helpful when working with libraries
+to build your system:
+
+- :ref:`How to include static library files
+ <dev-manual/libraries:including static library files>`
+
+- :ref:`How to use the Multilib feature to combine multiple versions of
+ library files into a single image
+ <dev-manual/libraries:combining multiple versions of library files into one image>`
+
+- :ref:`How to install multiple versions of the same library in parallel on
+ the same system
+ <dev-manual/libraries:installing multiple versions of the same library>`
+
+Including Static Library Files
+==============================
+
+If you are building a library and the library offers static linking, you
+can control which static library files (``*.a`` files) get included in
+the built library.
+
+The :term:`PACKAGES` and
+:term:`FILES:* <FILES>` variables in the
+``meta/conf/bitbake.conf`` configuration file define how files installed
+by the :ref:`ref-tasks-install` task are packaged. By default, the :term:`PACKAGES`
+variable includes ``${PN}-staticdev``, which represents all static
+library files.
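+
+For example, to install the static library files produced by a hypothetical
+``libfoo`` recipe into an image (typically only useful for development
+images), you could add the corresponding ``staticdev`` package to
+:term:`IMAGE_INSTALL` in your image recipe or ``local.conf``::
+
+   IMAGE_INSTALL:append = " libfoo-staticdev"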
+
+.. note::
+
+ Some previously released versions of the Yocto Project defined the
+ static library files through ``${PN}-dev``.
+
+Here is the part of the BitBake configuration file, where you can see
+how the static library files are defined::
+
+ PACKAGE_BEFORE_PN ?= ""
+ PACKAGES = "${PN}-src ${PN}-dbg ${PN}-staticdev ${PN}-dev ${PN}-doc ${PN}-locale ${PACKAGE_BEFORE_PN} ${PN}"
+ PACKAGES_DYNAMIC = "^${PN}-locale-.*"
+ FILES = ""
+
+ FILES:${PN} = "${bindir}/* ${sbindir}/* ${libexecdir}/* ${libdir}/lib*${SOLIBS} \
+ ${sysconfdir} ${sharedstatedir} ${localstatedir} \
+ ${base_bindir}/* ${base_sbindir}/* \
+ ${base_libdir}/*${SOLIBS} \
+ ${base_prefix}/lib/udev ${prefix}/lib/udev \
+ ${base_libdir}/udev ${libdir}/udev \
+ ${datadir}/${BPN} ${libdir}/${BPN}/* \
+ ${datadir}/pixmaps ${datadir}/applications \
+ ${datadir}/idl ${datadir}/omf ${datadir}/sounds \
+ ${libdir}/bonobo/servers"
+
+ FILES:${PN}-bin = "${bindir}/* ${sbindir}/*"
+
+ FILES:${PN}-doc = "${docdir} ${mandir} ${infodir} ${datadir}/gtk-doc \
+ ${datadir}/gnome/help"
+ SECTION:${PN}-doc = "doc"
+
+ FILES_SOLIBSDEV ?= "${base_libdir}/lib*${SOLIBSDEV} ${libdir}/lib*${SOLIBSDEV}"
+ FILES:${PN}-dev = "${includedir} ${FILES_SOLIBSDEV} ${libdir}/*.la \
+ ${libdir}/*.o ${libdir}/pkgconfig ${datadir}/pkgconfig \
+ ${datadir}/aclocal ${base_libdir}/*.o \
+ ${libdir}/${BPN}/*.la ${base_libdir}/*.la \
+ ${libdir}/cmake ${datadir}/cmake"
+ SECTION:${PN}-dev = "devel"
+ ALLOW_EMPTY:${PN}-dev = "1"
+ RDEPENDS:${PN}-dev = "${PN} (= ${EXTENDPKGV})"
+
+ FILES:${PN}-staticdev = "${libdir}/*.a ${base_libdir}/*.a ${libdir}/${BPN}/*.a"
+ SECTION:${PN}-staticdev = "devel"
+ RDEPENDS:${PN}-staticdev = "${PN}-dev (= ${EXTENDPKGV})"
+
+Combining Multiple Versions of Library Files into One Image
+===========================================================
+
+The build system offers the ability to build libraries with different
+target optimizations or architecture formats and combine these together
+into one system image. You can link different binaries in the image
+against the different libraries as needed for specific use cases. This
+feature is called "Multilib".
+
+An example would be where you have most of a system compiled in 32-bit
+mode using 32-bit libraries, but you have something large, like a
+database engine, that needs to be a 64-bit application and uses 64-bit
+libraries. Multilib allows you to get the best of both 32-bit and 64-bit
+libraries.
+
+While the Multilib feature is most commonly used for 32 and 64-bit
+differences, the approach the build system uses facilitates different
+target optimizations. You could compile some binaries to use one set of
+libraries and other binaries to use a different set of libraries. The
+libraries could differ in architecture, compiler options, or other
+optimizations.
+
+There are several examples in the ``meta-skeleton`` layer found in the
+:term:`Source Directory`:
+
+- :oe_git:`conf/multilib-example.conf </openembedded-core/tree/meta-skeleton/conf/multilib-example.conf>`
+ configuration file.
+
+- :oe_git:`conf/multilib-example2.conf </openembedded-core/tree/meta-skeleton/conf/multilib-example2.conf>`
+ configuration file.
+
+- :oe_git:`recipes-multilib/images/core-image-multilib-example.bb </openembedded-core/tree/meta-skeleton/recipes-multilib/images/core-image-multilib-example.bb>`
+ recipe
+
+Preparing to Use Multilib
+-------------------------
+
+User-specific requirements drive the Multilib feature. Consequently,
+there is no one "out-of-the-box" configuration that would
+meet your needs.
+
+In order to enable Multilib, you first need to ensure your recipe is
+extended to support multiple libraries. Many standard recipes are
+already extended and support multiple libraries. You can check in the
+``meta/conf/multilib.conf`` configuration file in the
+:term:`Source Directory` to see how this is
+done using the
+:term:`BBCLASSEXTEND` variable.
+Eventually, all recipes will be covered and this list will not be
+needed.
+
+For the most part, the :ref:`Multilib <ref-classes-multilib*>`
+class extension works automatically to
+extend the package name from ``${PN}`` to ``${MLPREFIX}${PN}``, where
+:term:`MLPREFIX` is the particular multilib (e.g. "lib32-" or "lib64-").
+Standard variables such as
+:term:`DEPENDS`,
+:term:`RDEPENDS`,
+:term:`RPROVIDES`,
+:term:`RRECOMMENDS`,
+:term:`PACKAGES`, and
+:term:`PACKAGES_DYNAMIC` are
+automatically extended by the system. If you are extending any manual
+code in the recipe, you can use the ``${MLPREFIX}`` variable to ensure
+those names are extended correctly.
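+
+For example, a manually written runtime dependency can use ``${MLPREFIX}``
+so that it resolves to the correct multilib package name (e.g.
+``lib32-bash`` in a "lib32" build). Here is a minimal sketch::
+
+   RDEPENDS:${PN}:append = " ${MLPREFIX}bash"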
+
+Using Multilib
+--------------
+
+After you have set up the recipes, you need to define the actual
+combination of multiple libraries you want to build. You accomplish this
+through your ``local.conf`` configuration file in the
+:term:`Build Directory`. An example configuration would be as follows::
+
+ MACHINE = "qemux86-64"
+ require conf/multilib.conf
+ MULTILIBS = "multilib:lib32"
+ DEFAULTTUNE:virtclass-multilib-lib32 = "x86"
+ IMAGE_INSTALL:append = " lib32-glib-2.0"
+
+This example enables an additional multilib named
+``lib32`` alongside the normal target packages.
+"lib32" alternatives, the example uses "x86" for tuning. For information
+on this particular tuning, see
+``meta/conf/machine/include/ia32/arch-ia32.inc``.
+
+The example then includes ``lib32-glib-2.0`` in all the images, which
+illustrates one method of including a multiple library dependency. You
+can use a normal image build to include this dependency, for example::
+
+ $ bitbake core-image-sato
+
+You can also build Multilib packages
+specifically with a command like this::
+
+ $ bitbake lib32-glib-2.0
+
+Additional Implementation Details
+---------------------------------
+
+There are generic implementation details as well as details that are specific to
+package management systems. Here are implementation details
+that exist regardless of the package management system:
+
+- The typical convention used for the class extension code as used by
+ Multilib assumes that all package names specified in
+ :term:`PACKAGES` that contain
+ ``${PN}`` have ``${PN}`` at the start of the name. When that
+ convention is not followed and ``${PN}`` appears at the middle or the
+ end of a name, problems occur.
+
+- The :term:`TARGET_VENDOR`
+ value under Multilib will be extended to "-vendormlmultilib" (e.g.
+ "-pokymllib32" for a "lib32" Multilib with Poky). The reason for this
+ slightly unwieldy contraction is that any "-" characters in the
+ vendor string presently break Autoconf's ``config.sub``, and other
+ separators are problematic for different reasons.
+
+Here are the implementation details for the RPM Package Management System:
+
+- A unique architecture is defined for the Multilib packages, along
+ with creating a unique deploy folder under ``tmp/deploy/rpm`` in the
+ :term:`Build Directory`. For example, consider ``lib32`` in a
+ ``qemux86-64`` image. The possible architectures in the system are "all",
+ "qemux86_64", "lib32:qemux86_64", and "lib32:x86".
+
+- The ``${MLPREFIX}`` variable is stripped from ``${PN}`` during RPM
+ packaging. The naming for a normal RPM package and a Multilib RPM
+ package in a ``qemux86-64`` system resolves to something similar to
+ ``bash-4.1-r2.x86_64.rpm`` and ``bash-4.1.r2.lib32_x86.rpm``,
+ respectively.
+
+- When installing a Multilib image, the RPM backend first installs the
+ base image and then installs the Multilib libraries.
+
+- The build system relies on RPM to resolve the identical files in the
+ two (or more) Multilib packages.
+
+Here are the implementation details for the IPK Package Management System:
+
+-  The ``${MLPREFIX}`` is not stripped from ``${PN}`` during IPK
+   packaging. The naming for a normal IPK package and a Multilib IPK
+   package in a ``qemux86-64`` system resolves to something like
+   ``bash_4.1-r2.x86_64.ipk`` and ``lib32-bash_4.1-r2.lib32_x86.ipk``,
+   respectively.
+
+- The IPK deploy folder is not modified with ``${MLPREFIX}`` because
+ packages with and without the Multilib feature can exist in the same
+ folder due to the ``${PN}`` differences.
+
+-  IPK defines a sanity check for Multilib installation using certain
+   rules for file comparison, overrides, and so forth.
+
+Installing Multiple Versions of the Same Library
+================================================
+
+There can be situations where you need to install and use multiple versions
+of the same library on the same system at the same time. This
+almost always happens when a library API changes and you have
+multiple pieces of software that depend on the separate versions of the
+library. To accommodate these situations, you can install multiple
+versions of the same library in parallel on the same system.
+
+The process is straightforward as long as the libraries use proper
+versioning. With properly versioned libraries, all you need to do to
+individually specify the libraries is create separate, appropriately
+named recipes where the :term:`PN` part of
+the name includes a portion that differentiates each library version
+(e.g. the major part of the version number). Thus, instead of having a
+single recipe that loads one version of a library (e.g. ``clutter``),
+you provide multiple recipes that result in different versions of the
+libraries you want. As an example, the following two recipes would allow
+the two separate versions of the ``clutter`` library to co-exist on the
+same system:
+
+.. code-block:: none
+
+ clutter-1.6_1.6.20.bb
+ clutter-1.8_1.8.4.bb
+
+Additionally, if
+you have other recipes that depend on a given library, you need to use
+the :term:`DEPENDS` variable to
+create the dependency. Continuing with the same example, if you want to
+have a recipe depend on the 1.8 version of the ``clutter`` library, use
+the following in your recipe::
+
+ DEPENDS = "clutter-1.8"
+
diff --git a/documentation/dev-manual/licenses.rst b/documentation/dev-manual/licenses.rst
new file mode 100644
index 0000000000..bffff3675f
--- /dev/null
+++ b/documentation/dev-manual/licenses.rst
@@ -0,0 +1,544 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Working With Licenses
+*********************
+
+As mentioned in the ":ref:`overview-manual/development-environment:licensing`"
+section in the Yocto Project Overview and Concepts Manual, open source
+projects are open to the public and they consequently have different
+licensing structures in place. This section describes the mechanism by
+which the :term:`OpenEmbedded Build System`
+tracks changes to
+licensing text and covers how to maintain open source license compliance
+during your project's lifecycle. The section also describes how to
+enable commercially licensed recipes, which by default are disabled.
+
+Tracking License Changes
+========================
+
+The license of an upstream project might change in the future. In order
+to prevent these changes going unnoticed, the
+:term:`LIC_FILES_CHKSUM`
+variable tracks changes to the license text. The checksums are validated
+at the end of the configure step, and if the checksums do not match, the
+build will fail.
+
+Specifying the ``LIC_FILES_CHKSUM`` Variable
+--------------------------------------------
+
+The :term:`LIC_FILES_CHKSUM` variable contains checksums of the license text
+in the source code for the recipe. Here is an example of how to
+specify :term:`LIC_FILES_CHKSUM`::
+
+ LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \
+ file://licfile1.txt;beginline=5;endline=29;md5=yyyy \
+ file://licfile2.txt;endline=50;md5=zzzz \
+ ..."
+
+.. note::
+
+ - When using "beginline" and "endline", realize that line numbering
+ begins with one and not zero. Also, the included lines are
+ inclusive (i.e. lines five through and including 29 in the
+ previous example for ``licfile1.txt``).
+
+ - When a license check fails, the selected license text is included
+ as part of the QA message. Using this output, you can determine
+ the exact start and finish for the needed license text.
+
+The build system uses the :term:`S`
+variable as the default directory when searching files listed in
+:term:`LIC_FILES_CHKSUM`. The previous example employs the default
+directory.
+
+Consider this next example::
+
+ LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\
+ md5=bb14ed3c4cda583abc85401304b5cd4e"
+ LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
+
+The first line locates a file in ``${S}/src/ls.c`` and isolates lines
+five through 16 as license text. The second line refers to a file in
+:term:`WORKDIR`.
+
+Note that :term:`LIC_FILES_CHKSUM` variable is mandatory for all recipes,
+unless the :term:`LICENSE` variable is set to "CLOSED".
+
+Explanation of Syntax
+---------------------
+
+As mentioned in the previous section, the :term:`LIC_FILES_CHKSUM` variable
+lists all the important files that contain the license text for the
+source code. It is possible to specify a checksum for an entire file, or
+a specific section of a file (specified by beginning and ending line
+numbers with the "beginline" and "endline" parameters, respectively).
+The latter is useful for source files with a license notice header,
+README documents, and so forth. If you do not use the "beginline"
+parameter, then it is assumed that the text begins on the first line of
+the file. Similarly, if you do not use the "endline" parameter, it is
+assumed that the license text ends with the last line of the file.
+
+The "md5" parameter stores the md5 checksum of the license text. If the
+license text changes in any way as compared to this parameter then a
+mismatch occurs. This mismatch triggers a build failure and notifies the
+developer. Notification allows the developer to review and address the
+license text changes. Also note that if a mismatch occurs during the
+build, the correct md5 checksum is placed in the build log and can be
+easily copied to the recipe.
+
+There is no limit to how many files you can specify using the
+:term:`LIC_FILES_CHKSUM` variable. Generally, however, every project
+requires a few specifications for license tracking. Many projects have a
+"COPYING" file that stores the license information for all the source
+code files. This practice allows you to just track the "COPYING" file as
+long as it is kept up to date.
+
+.. note::
+
+ - If you specify an empty or invalid "md5" parameter,
+ :term:`BitBake` returns an md5
+ mis-match error and displays the correct "md5" parameter value
+ during the build. The correct parameter is also captured in the
+ build log.
+
+ - If the whole file contains only license text, you do not need to
+ use the "beginline" and "endline" parameters.
+
+Enabling Commercially Licensed Recipes
+======================================
+
+By default, the OpenEmbedded build system disables components that have
+commercial or other special licensing requirements. Such requirements
+are defined on a recipe-by-recipe basis through the
+:term:`LICENSE_FLAGS` variable
+definition in the affected recipe. For instance, the
+``poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly`` recipe
+contains the following statement::
+
+ LICENSE_FLAGS = "commercial"
+
+Here is a
+slightly more complicated example that contains both an explicit recipe
+name and version (after variable expansion)::
+
+ LICENSE_FLAGS = "license_${PN}_${PV}"
+
+It is possible to give more details about a specific license
+using flags on the :term:`LICENSE_FLAGS_DETAILS` variable::
+
+ LICENSE_FLAGS_DETAILS[my-eula-license] = "For further details, see https://example.com/eula."
+
+If set, this will be displayed to the user if the license hasn't been accepted.
+
+In order for a component restricted by a
+:term:`LICENSE_FLAGS` definition to be enabled and included in an image, it
+needs to have a matching entry in the global
+:term:`LICENSE_FLAGS_ACCEPTED`
+variable, which is a variable typically defined in your ``local.conf``
+file. For example, to enable the
+``poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly`` package, you
+could add either the string "commercial_gst-plugins-ugly" or the more
+general string "commercial" to :term:`LICENSE_FLAGS_ACCEPTED`. See the
+":ref:`dev-manual/licenses:license flag matching`" section for a full
+explanation of how :term:`LICENSE_FLAGS` matching works. Here is the
+example::
+
+ LICENSE_FLAGS_ACCEPTED = "commercial_gst-plugins-ugly"
+
+Likewise, to additionally enable the package built from the recipe
+containing ``LICENSE_FLAGS = "license_${PN}_${PV}"``, and assuming that
+the actual recipe name was ``emgd_1.10.bb``, the following string would
+enable that package as well as the original ``gst-plugins-ugly``
+package::
+
+ LICENSE_FLAGS_ACCEPTED = "commercial_gst-plugins-ugly license_emgd_1.10"
+
+As a convenience, you do not need to specify the
+complete license string for every package. You can use
+an abbreviated form, which consists of just the first portion or
+portions of the license string before the initial underscore character
+or characters. A partial string will match any license that contains the
+given string as the first portion of its license. For example, the
+following value will also match both of the packages
+previously mentioned as well as any other packages that have licenses
+starting with "commercial" or "license"::
+
+ LICENSE_FLAGS_ACCEPTED = "commercial license"
+
+License Flag Matching
+---------------------
+
+License flag matching allows you to control what recipes the
+OpenEmbedded build system includes in the build. Fundamentally, the
+build system attempts to match :term:`LICENSE_FLAGS` strings found in
+recipes against strings found in :term:`LICENSE_FLAGS_ACCEPTED`.
+A match causes the build system to include a recipe in the
+build, while failure to find a match causes the build system to exclude
+a recipe.
+
+In general, license flag matching is simple. However, understanding some
+concepts will help you correctly and effectively use matching.
+
+Before a flag defined by a particular recipe is tested against the
+entries of :term:`LICENSE_FLAGS_ACCEPTED`, the expanded
+string ``_${PN}`` is appended to the flag. This expansion makes each
+:term:`LICENSE_FLAGS` value recipe-specific. After expansion, the
+string is then matched against the entries. Thus, specifying
+``LICENSE_FLAGS = "commercial"`` in recipe "foo", for example, results
+in the string ``"commercial_foo"``. And, to create a match, that string
+must appear among the entries of :term:`LICENSE_FLAGS_ACCEPTED`.
+
+Judicious use of the :term:`LICENSE_FLAGS` strings and the contents of the
+:term:`LICENSE_FLAGS_ACCEPTED` variable allows you a lot of flexibility for
+including or excluding recipes based on licensing. For example, you can
+broaden the matching capabilities by using license flags string subsets
+in :term:`LICENSE_FLAGS_ACCEPTED`.
+
+.. note::
+
+ When using a string subset, be sure to use the part of the expanded
+ string that precedes the appended underscore character (e.g.
+ ``usethispart_1.3``, ``usethispart_1.4``, and so forth).
+
+For example, simply specifying the string "commercial" in the
+:term:`LICENSE_FLAGS_ACCEPTED` variable matches any expanded
+:term:`LICENSE_FLAGS` definition that starts with the string
+"commercial" such as "commercial_foo" and "commercial_bar", which
+are the strings the build system automatically generates for
+hypothetical recipes named "foo" and "bar" assuming those recipes simply
+specify the following::
+
+ LICENSE_FLAGS = "commercial"
+
+Thus, you can choose to exhaustively enumerate each license flag in the
+list and allow only specific recipes into the image, or you can use a
+string subset that causes a broader range of matches to allow a range of
+recipes into the image.
+
+This scheme works even if the :term:`LICENSE_FLAGS` string already has
+``_${PN}`` appended. For example, the build system turns the license
+flag "commercial_1.2_foo" into "commercial_1.2_foo_foo" and would match
+both the general "commercial" and the specific "commercial_1.2_foo"
+strings found in the :term:`LICENSE_FLAGS_ACCEPTED` variable, as expected.
+
+Here are some other scenarios:
+
+- You can specify a versioned string in the recipe such as
+ "commercial_foo_1.2" in a "foo" recipe. The build system expands this
+ string to "commercial_foo_1.2_foo". Combine this license flag with a
+ :term:`LICENSE_FLAGS_ACCEPTED` variable that has the string
+ "commercial" and you match the flag along with any other flag that
+ starts with the string "commercial".
+
+- Under the same circumstances, you can add "commercial_foo" in the
+ :term:`LICENSE_FLAGS_ACCEPTED` variable and the build system not only
+ matches "commercial_foo_1.2" but also matches any license flag with
+ the string "commercial_foo", regardless of the version.
+
+- You can be very specific and use both the package and version parts
+ in the :term:`LICENSE_FLAGS_ACCEPTED` list (e.g.
+ "commercial_foo_1.2") to specifically match a versioned recipe.
+
+Other Variables Related to Commercial Licenses
+----------------------------------------------
+
+There are other helpful variables related to commercial license handling,
+defined in the
+``poky/meta/conf/distro/include/default-distrovars.inc`` file::
+
+ COMMERCIAL_AUDIO_PLUGINS ?= ""
+ COMMERCIAL_VIDEO_PLUGINS ?= ""
+
+If you want to enable these components, you can do so by making sure you have
+statements similar to the following in your ``local.conf`` configuration file::
+
+ COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \
+ gst-plugins-ugly-mpegaudioparse"
+ COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \
+ gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse"
+ LICENSE_FLAGS_ACCEPTED = "commercial_gst-plugins-ugly commercial_gst-plugins-bad commercial_qmmp"
+
+Of course, you could also create a matching list for those components using the
+more general "commercial" string in the :term:`LICENSE_FLAGS_ACCEPTED` variable,
+but that would also enable all the other packages with :term:`LICENSE_FLAGS`
+containing "commercial", which you may or may not want::
+
+ LICENSE_FLAGS_ACCEPTED = "commercial"
+
+Specifying audio and video plugins as part of the
+:term:`COMMERCIAL_AUDIO_PLUGINS` and :term:`COMMERCIAL_VIDEO_PLUGINS` statements
+(along with :term:`LICENSE_FLAGS_ACCEPTED`) includes the plugins or
+components into built images, thus adding support for media formats or
+components.
+
+.. note::
+
+ GStreamer "ugly" and "bad" plugins are actually available through
+ open source licenses. However, the "ugly" ones can be subject to software
+ patents in some countries, making it necessary to pay licensing fees
+ to distribute them. The "bad" ones are just deemed unreliable by the
+ GStreamer community and should therefore be used with care.
+
+Maintaining Open Source License Compliance During Your Product's Lifecycle
+==========================================================================
+
+One of the concerns for a development organization using open source
+software is how to maintain compliance with various open source
+licensing during the lifecycle of the product. While this section does
+not provide legal advice or comprehensively cover all scenarios, it does
+present methods that you can use to assist you in meeting the compliance
+requirements during a software release.
+
+With hundreds of different open source licenses that the Yocto Project
+tracks, it is difficult to know the requirements of each and every
+license. However, you can begin to cover the requirements of the major
+FLOSS licenses by assuming that there are three main areas of concern:
+
+- Source code must be provided.
+
+- License text for the software must be provided.
+
+- Compilation scripts and modifications to the source code must be
+ provided.
+
+There are other requirements beyond the scope of these three and the
+methods described in this section (e.g. the mechanism through which
+source code is distributed).
+
+As different organizations have different ways of releasing software,
+there can be multiple ways of meeting license obligations. Here, we
+describe two methods for achieving compliance:
+
+- The first method is to use OpenEmbedded's ability to provide
+ the source code, provide a list of licenses, as well as
+ compilation scripts and source code modifications.
+
+ The remainder of this section describes supported methods to meet
+ the previously mentioned three requirements.
+
+-  The second method is to generate a *Software Bill of Materials*
+   (:term:`SBoM`), as described in the ":doc:`/dev-manual/sbom`" section.
+   Not only do you generate :term:`SPDX` output which can be used to meet
+   license compliance requirements (except for sharing the build system
+   and layer sources for the time being), but this output also includes
+   component version and patch information which can be used for
+   vulnerability assessment.
+
+Whatever method you choose, prior to releasing images, sources,
+and the build system, you should audit all artifacts to ensure
+completeness.
+
+.. note::
+
+ The Yocto Project generates a license manifest during image creation
+ that is located in
+ ``${DEPLOY_DIR}/licenses/${SSTATE_PKGARCH}/<image-name>-<machine>.rootfs-<datestamp>/``
+ to assist with any audits.
+
+Providing the Source Code
+-------------------------
+
+Compliance activities should begin before you generate the final image.
+The first thing you should look at is the requirement that tops the list
+for most compliance groups --- providing the source. The Yocto Project has
+a few ways of meeting this requirement.
+
+One of the easiest ways to meet this requirement is to provide the
+entire :term:`DL_DIR` used by the
+build. This method, however, has a few issues. The most obvious is the
+size of the directory since it includes all sources used in the build
+and not just the source used in the released image. It will include
+toolchain source, and other artifacts, which you would not generally
+release. However, the more serious issue for most companies is
+accidental release of proprietary software. The Yocto Project provides
+an :ref:`ref-classes-archiver` class to help avoid some of these concerns.
+
+Before you employ :term:`DL_DIR` or the :ref:`ref-classes-archiver` class, you
+need to decide how you choose to provide source. The source
+:ref:`ref-classes-archiver` class can generate tarballs and SRPMs and can
+create them with various levels of compliance in mind.
+
+One way of doing this (but certainly not the only way) is to release
+just the source as a tarball. You can do this by adding the following to
+the ``local.conf`` file found in the :term:`Build Directory`::
+
+ INHERIT += "archiver"
+ ARCHIVER_MODE[src] = "original"
+
+During the creation of your
+image, the source from all recipes that deploy packages to the image is
+placed within subdirectories of ``DEPLOY_DIR/sources`` based on the
+:term:`LICENSE` for each recipe.
+Releasing the entire directory enables you to comply with requirements
+concerning providing the unmodified source. It is important to note that
+the size of the directory can get large.
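+
+If you also need to provide the patched source rather than only the
+unmodified upstream source, the :ref:`ref-classes-archiver` class supports
+other modes. The following sketch archives the patched source along with a
+diff of the applied changes; check the class documentation for the options
+available in your release::
+
+   INHERIT += "archiver"
+   ARCHIVER_MODE[src] = "patched"
+   ARCHIVER_MODE[diff] = "1"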
+
+A way to help mitigate the size issue is to only release tarballs for
+licenses that require the release of source. Let us assume you are only
+concerned with GPL code as identified by running the following script:
+
+.. code-block:: shell
+
+   #!/bin/bash
+   # Script to archive a subset of packages matching specific license(s)
+   # Source and license files are copied into sub folders of package folder
+   # Must be run from the build folder
+ src_release_dir="source-release"
+ mkdir -p $src_release_dir
+ for a in tmp/deploy/sources/*; do
+ for d in $a/*; do
+ # Get package name from path
+ p=`basename $d`
+ p=${p%-*}
+ p=${p%-*}
+ # Only archive GPL packages (update *GPL* regex for your license check)
+ numfiles=`ls tmp/deploy/licenses/$p/*GPL* 2> /dev/null | wc -l`
+ if [ $numfiles -ge 1 ]; then
+ echo Archiving $p
+ mkdir -p $src_release_dir/$p/source
+ cp $d/* $src_release_dir/$p/source 2> /dev/null
+ mkdir -p $src_release_dir/$p/license
+ cp tmp/deploy/licenses/$p/* $src_release_dir/$p/license 2> /dev/null
+ fi
+ done
+ done
+
+At this point, you
+could create a tarball from the ``source-release`` directory created by the
+script and provide that to the end user. This method would be a step toward
+achieving compliance with section 3a of GPLv2 and with section 6 of
+GPLv3.
+
+Providing License Text
+----------------------
+
+One requirement that is often overlooked is inclusion of license text.
+This requirement also needs to be dealt with prior to generating the
+final image. Some licenses require the license text to accompany the
+binary. You can achieve this by adding the following to your
+``local.conf`` file::
+
+ COPY_LIC_MANIFEST = "1"
+ COPY_LIC_DIRS = "1"
+ LICENSE_CREATE_PACKAGE = "1"
+
+Adding these statements to the
+configuration file ensures that the licenses collected during package
+generation are included on your image.
+
+.. note::
+
+ Setting all three variables to "1" results in the image having two
+ copies of the same license file. One copy resides in
+ ``/usr/share/common-licenses`` and the other resides in
+ ``/usr/share/license``.
+
+ The reason for this behavior is because
+ :term:`COPY_LIC_DIRS` and
+ :term:`COPY_LIC_MANIFEST`
+ add a copy of the license when the image is built but do not offer a
+ path for adding licenses for newly installed packages to an image.
+ :term:`LICENSE_CREATE_PACKAGE`
+ adds a separate package and an upgrade path for adding licenses to an
+ image.
+
+As the source :ref:`ref-classes-archiver` class has already archived the
+original unmodified source that contains the license files, you would have
+already met the requirements for inclusion of the license information
+with source as defined by the GPL and other open source licenses.
+
+Providing Compilation Scripts and Source Code Modifications
+-----------------------------------------------------------
+
+At this point, we have addressed all we need prior to generating the
+image. The next two requirements are addressed during the final
+packaging of the release.
+
+By releasing the version of the OpenEmbedded build system and the layers
+used during the build, you will be providing both compilation scripts
+and the source code modifications in one step.
+
+If the deployment team has a :ref:`overview-manual/concepts:bsp layer`
+and a distro layer, and those layers are used to patch, compile,
+package, or modify (in any way)
+any open source software included in your released images, you might be
+required to release those layers under section 3 of GPLv2 or section 1
+of GPLv3. One way of doing that is with a clean checkout of the version
+of the Yocto Project and layers used during your build. Here is an
+example:
+
+.. code-block:: shell
+
+ # We built using the dunfell branch of the poky repo
+ $ git clone -b dunfell git://git.yoctoproject.org/poky
+ $ cd poky
+ # We built using the release_branch for our layers
+ $ git clone -b release_branch git://git.mycompany.com/meta-my-bsp-layer
+ $ git clone -b release_branch git://git.mycompany.com/meta-my-software-layer
+ # clean up the .git repos
+ $ find . -name ".git" -type d -exec rm -rf {} \;
+
+One thing a development organization might want to consider for end-user
+convenience is to modify
+``meta-poky/conf/templates/default/bblayers.conf.sample`` to ensure that when
+the end user utilizes the released build system to build an image, the
+development organization's layers are included in the ``bblayers.conf`` file
+automatically::
+
+ # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
+ # changes incompatibly
+ POKY_BBLAYERS_CONF_VERSION = "2"
+
+ BBPATH = "${TOPDIR}"
+ BBFILES ?= ""
+
+ BBLAYERS ?= " \
+ ##OEROOT##/meta \
+ ##OEROOT##/meta-poky \
+ ##OEROOT##/meta-yocto-bsp \
+ ##OEROOT##/meta-mylayer \
+ "
+
+Creating and
+providing an archive of the :term:`Metadata`
+layers (recipes, configuration files, and so forth) enables you to meet
+your requirements to include the scripts to control compilation as well
+as any modifications to the original source.
+
+Compliance Limitations with Executables Built from Static Libraries
+-------------------------------------------------------------------
+
+When package A is added to an image via the :term:`RDEPENDS` or :term:`RRECOMMENDS`
+mechanisms as well as explicitly included in the image recipe with
+:term:`IMAGE_INSTALL`, and depends on a static linked library recipe B
+(``DEPENDS += "B"``), package B will neither appear in the generated license
+manifest nor in the generated source tarballs. This occurs as the
+:ref:`ref-classes-license` and :ref:`ref-classes-archiver` classes assume that
+only packages included via :term:`RDEPENDS` or :term:`RRECOMMENDS`
+end up in the image.
+
+As a result, potential obligations regarding license compliance for package B
+may not be met.
+
+The Yocto Project doesn't enable static libraries by default, in part because
+of this issue. Until a solution to this limitation is found, you need to
+keep in mind that if your root filesystem is built from static libraries,
+you will need to manually ensure that your deliveries are compliant
+with the licenses of these libraries.
+
+Copying Non Standard Licenses
+=============================
+
+Some packages, such as the linux-firmware package, have many licenses
+that are not in any way common. You can avoid adding a lot of these
+license files, which apply only to a specific package, to the set of
+common licenses by using the
+:term:`NO_GENERIC_LICENSE`
+variable. Using this variable also avoids QA errors when you use a
+non-common, non-CLOSED license in a recipe.
+
+Here is an example that uses the ``LICENSE.Abilis.txt`` file as
+the license from the fetched source::
+
+ NO_GENERIC_LICENSE[Firmware-Abilis] = "LICENSE.Abilis.txt"
+
diff --git a/documentation/dev-manual/new-machine.rst b/documentation/dev-manual/new-machine.rst
new file mode 100644
index 0000000000..469b2d395a
--- /dev/null
+++ b/documentation/dev-manual/new-machine.rst
@@ -0,0 +1,118 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Adding a New Machine
+********************
+
+Adding a new machine to the Yocto Project is a straightforward process.
+This section describes how to add machines that are similar to those
+that the Yocto Project already supports.
+
+.. note::
+
+ Although well within the capabilities of the Yocto Project, adding a
+ totally new architecture might require changes to ``gcc``/``glibc``
+ and to the site information, which is beyond the scope of this
+ manual.
+
+For a complete example that shows how to add a new machine, see the
+":ref:`bsp-guide/bsp:creating a new bsp layer using the \`\`bitbake-layers\`\` script`"
+section in the Yocto Project Board Support Package (BSP) Developer's
+Guide.
+
+Adding the Machine Configuration File
+=====================================
+
+To add a new machine, you need to add a new machine configuration file
+to the layer's ``conf/machine`` directory. This configuration file
+provides details about the device you are adding.
+
+The OpenEmbedded build system uses the root name of the machine
+configuration file to reference the new machine. For example, given a
+machine configuration file named ``crownbay.conf``, the build system
+recognizes the machine as "crownbay".
+
+The most important variables you must set in your machine configuration
+file or include from a lower-level configuration file are as follows:
+
+- :term:`TARGET_ARCH` (e.g. "arm")
+
+- ``PREFERRED_PROVIDER_virtual/kernel``
+
+- :term:`MACHINE_FEATURES` (e.g. "screen wifi")
+
+You might also need these variables:
+
+- :term:`SERIAL_CONSOLES` (e.g. "115200;ttyS0 115200;ttyS1")
+
+- :term:`KERNEL_IMAGETYPE` (e.g. "zImage")
+
+- :term:`IMAGE_FSTYPES` (e.g. "tar.gz jffs2")
+
+You can find full details on these variables in the reference section.
+You can leverage existing machine ``.conf`` files from
+``meta-yocto-bsp/conf/machine/``.
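+
+For illustration only, here is a minimal, hypothetical
+``conf/machine/mymachine.conf`` that combines the variables listed above.
+A real machine configuration usually also pulls in a CPU tuning include
+file appropriate for the hardware rather than setting :term:`TARGET_ARCH`
+directly::
+
+   #@TYPE: Machine
+   #@NAME: mymachine
+   #@DESCRIPTION: Machine configuration for the hypothetical mymachine board
+
+   TARGET_ARCH = "arm"
+   PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
+   MACHINE_FEATURES = "screen wifi"
+
+   SERIAL_CONSOLES = "115200;ttyS0"
+   KERNEL_IMAGETYPE = "zImage"
+   IMAGE_FSTYPES = "tar.gz jffs2"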
+
+Adding a Kernel for the Machine
+===============================
+
+The OpenEmbedded build system needs to be able to build a kernel for the
+machine. You need to either create a new kernel recipe for this machine,
+or extend an existing kernel recipe. You can find several kernel recipe
+examples in the Source Directory at ``meta/recipes-kernel/linux`` that
+you can use as references.
+
+If you are creating a new kernel recipe, normal recipe-writing rules
+apply for setting up a :term:`SRC_URI`. Thus, you need to specify any
+necessary patches and set :term:`S` to point at the source code. You need to
+create a :ref:`ref-tasks-configure` task that configures the unpacked kernel with
+a ``defconfig`` file. You can do this by using a ``make defconfig``
+command or, more commonly, by copying in a suitable ``defconfig`` file
+and then running ``make oldconfig``. By making use of ``inherit kernel``
+and potentially some of the ``linux-*.inc`` files, most other
+functionality is centralized and the defaults of the class normally work
+well.
+
+If you are extending an existing kernel recipe, it is usually a matter
+of adding a suitable ``defconfig`` file. The file needs to be added into
+a location similar to ``defconfig`` files used for other machines in a
+given kernel recipe. A possible way to do this is by listing the file in
+the :term:`SRC_URI` and adding the machine to the expression in
+:term:`COMPATIBLE_MACHINE`::
+
+ COMPATIBLE_MACHINE = '(qemux86|qemumips)'
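+
+For instance, assuming a hypothetical machine named "mymachine", a
+``.bbappend`` in your layer that adds a ``defconfig`` for that machine
+might look roughly like the following sketch::
+
+   FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
+
+   COMPATIBLE_MACHINE:mymachine = "mymachine"
+   SRC_URI:append:mymachine = " file://defconfig"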
+
+For more information on ``defconfig`` files, see the
+":ref:`kernel-dev/common:changing the configuration`"
+section in the Yocto Project Linux Kernel Development Manual.
+
+Adding a Formfactor Configuration File
+======================================
+
+A formfactor configuration file provides information about the target
+hardware for which the image is being built, as well as information that
+the build system cannot obtain from other sources, such as the kernel. Some
+examples of information contained in a formfactor configuration file
+include framebuffer orientation, whether or not the system has a
+keyboard, the positioning of the keyboard in relation to the screen, and
+the screen resolution.
+
+The build system uses reasonable defaults in most cases. However, if
+customization is necessary, you need to create a ``machconfig`` file in
+the ``meta/recipes-bsp/formfactor/files`` directory. This directory
+contains directories for specific machines such as ``qemuarm`` and
+``qemux86``. For information about the settings available and the
+defaults, see the ``meta/recipes-bsp/formfactor/files/config`` file
+found in the same area.
+
+Here is an example for the "qemuarm" machine::
+
+ HAVE_TOUCHSCREEN=1
+ HAVE_KEYBOARD=1
+ DISPLAY_CAN_ROTATE=0
+ DISPLAY_ORIENTATION=0
+ #DISPLAY_WIDTH_PIXELS=640
+ #DISPLAY_HEIGHT_PIXELS=480
+ #DISPLAY_BPP=16
+ DISPLAY_DPI=150
+ DISPLAY_SUBPIXEL_ORDER=vrgb
+
diff --git a/documentation/dev-manual/new-recipe.rst b/documentation/dev-manual/new-recipe.rst
new file mode 100644
index 0000000000..61fc2eb122
--- /dev/null
+++ b/documentation/dev-manual/new-recipe.rst
@@ -0,0 +1,1639 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Writing a New Recipe
+********************
+
+Recipes (``.bb`` files) are fundamental components in the Yocto Project
+environment. Each software component built by the OpenEmbedded build
+system requires a recipe to define the component. This section describes
+how to create, write, and test a new recipe.
+
+.. note::
+
+ For information on variables that are useful for recipes and for
+ information about recipe naming issues, see the
+ ":ref:`ref-manual/varlocality:recipes`" section of the Yocto Project
+ Reference Manual.
+
+Overview
+========
+
+The following figure shows the basic process for creating a new recipe.
+The remainder of the section provides details for the steps.
+
+.. image:: figures/recipe-workflow.png
+ :align: center
+ :width: 50%
+
+Locate or Automatically Create a Base Recipe
+============================================
+
+You can always write a recipe from scratch. However, there are three choices
+that can help you quickly get started with a new recipe:
+
+- ``devtool add``: A command that assists in creating a recipe and an
+ environment conducive to development.
+
+- ``recipetool create``: A command provided by the Yocto Project that
+ automates creation of a base recipe based on the source files.
+
+- *Existing Recipes:* Location and modification of an existing recipe
+ that is similar in function to the recipe you need.
+
+.. note::
+
+ For information on recipe syntax, see the
+ ":ref:`dev-manual/new-recipe:recipe syntax`" section.
+
+Creating the Base Recipe Using ``devtool add``
+----------------------------------------------
+
+The ``devtool add`` command uses the same logic for auto-creating the
+recipe as ``recipetool create``, which is described below. Additionally,
+``devtool add`` sets up an environment that makes it easy for you to
+patch the source and to make changes to the recipe, as is often
+necessary when adding a recipe to build a new piece of software to be
+included in a build.
+
+You can find a complete description of the ``devtool add`` command in
+the ":ref:`sdk-manual/extensible:a closer look at \`\`devtool add\`\``" section
+in the Yocto Project Application Development and the Extensible Software
+Development Kit (eSDK) manual.
+
+Creating the Base Recipe Using ``recipetool create``
+----------------------------------------------------
+
+``recipetool create`` automates creation of a base recipe given a set of
+source code files. As long as you can extract or point to the source
+files, the tool will construct a recipe and automatically configure all
+pre-build information into the recipe. For example, suppose you have an
+application that builds using Autotools. Creating the base recipe using
+``recipetool`` results in a recipe that has the pre-build dependencies,
+license requirements, and checksums configured.
+
+To run the tool, you just need to be in your :term:`Build Directory` and
+have sourced the build environment setup script (i.e.
+:ref:`structure-core-script`). To get help on the tool, use the following
+command::
+
+ $ recipetool -h
+ NOTE: Starting bitbake server...
+ usage: recipetool [-d] [-q] [--color COLOR] [-h] <subcommand> ...
+
+ OpenEmbedded recipe tool
+
+ options:
+ -d, --debug Enable debug output
+ -q, --quiet Print only errors
+ --color COLOR Colorize output (where COLOR is auto, always, never)
+ -h, --help show this help message and exit
+
+ subcommands:
+ create Create a new recipe
+ newappend Create a bbappend for the specified target in the specified
+ layer
+ setvar Set a variable within a recipe
+ appendfile Create/update a bbappend to replace a target file
+ appendsrcfiles Create/update a bbappend to add or replace source files
+ appendsrcfile Create/update a bbappend to add or replace a source file
+ Use recipetool <subcommand> --help to get help on a specific command
+
+Running ``recipetool create -o OUTFILE`` creates the base recipe and
+locates it properly in the layer that contains your source files.
+Here are some syntax examples:
+
+ - Use this syntax to generate a recipe based on source. Once generated,
+ the recipe resides in the existing source code layer::
+
+ recipetool create -o OUTFILE source
+
+ - Use this syntax to generate a recipe using code that
+ you extract from source. The extracted code is placed in its own layer
+ defined by :term:`EXTERNALSRC`::
+
+ recipetool create -o OUTFILE -x EXTERNALSRC source
+
+ - Use this syntax to generate a recipe based on source. The options
+ direct ``recipetool`` to generate debugging information. Once generated,
+ the recipe resides in the existing source code layer::
+
+ recipetool create -d -o OUTFILE source
+
+Locating and Using a Similar Recipe
+-----------------------------------
+
+Before writing a recipe from scratch, it is often useful to discover
+whether someone else has already written one that meets (or comes close
+to meeting) your needs. The Yocto Project and OpenEmbedded communities
+maintain many recipes that might be candidates for what you are doing.
+You can find a good central index of these recipes in the
+:oe_layerindex:`OpenEmbedded Layer Index <>`.
+
+Working from an existing recipe or a skeleton recipe is the best way to
+get started. Here are some points on both methods:
+
+- *Locate and modify a recipe that is close to what you want to do:*
+ This method works when you are familiar with the current recipe
+ space. The method does not work so well for those new to the Yocto
+ Project or writing recipes.
+
+ Some risks associated with this method are using a recipe that has
+ areas totally unrelated to what you are trying to accomplish with
+ your recipe, not recognizing areas of the recipe that you might have
+ to add from scratch, and so forth. All these risks stem from
+ unfamiliarity with the existing recipe space.
+
+- *Use and modify the following skeleton recipe:* If for some reason
+ you do not want to use ``recipetool`` and you cannot find an existing
+ recipe that is close to meeting your needs, you can use the following
+ structure to provide the fundamental areas of a new recipe::
+
+ DESCRIPTION = ""
+ HOMEPAGE = ""
+ LICENSE = ""
+ SECTION = ""
+ DEPENDS = ""
+ LIC_FILES_CHKSUM = ""
+
+ SRC_URI = ""
+
+Storing and Naming the Recipe
+=============================
+
+Once you have your base recipe, you should put it in your own layer and
+name it appropriately. Locating it correctly ensures that the
+OpenEmbedded build system can find it when you use BitBake to process
+the recipe.
+
+- *Storing Your Recipe:* The OpenEmbedded build system locates your
+ recipe through the layer's ``conf/layer.conf`` file and the
+ :term:`BBFILES` variable. This
+ variable sets up a path from which the build system can locate
+ recipes. Here is the typical use::
+
+ BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+ ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+ Consequently, you need to be sure you locate your new recipe inside
+ your layer such that it can be found.
+
+ You can find more information on how layers are structured in the
+ ":ref:`dev-manual/layers:understanding and creating layers`" section.
+
+- *Naming Your Recipe:* When you name your recipe, you need to follow
+ this naming convention::
+
+ basename_version.bb
+
+ Use lower-cased characters and do not include the reserved suffixes
+ ``-native``, ``-cross``, ``-initial``, or ``-dev`` casually (i.e. do not use
+ them as part of your recipe name unless the string applies). Here are some
+ examples:
+
+ .. code-block:: none
+
+ cups_1.7.0.bb
+ gawk_4.0.2.bb
+ irssi_0.8.16-rc1.bb
+
+Running a Build on the Recipe
+=============================
+
+Creating a new recipe is usually an iterative process that requires
+using BitBake to process the recipe multiple times in order to
+progressively discover and add information to the recipe file.
+
+Assuming you have sourced the build environment setup script (i.e.
+:ref:`structure-core-script`) and you are in the :term:`Build Directory`, use
+BitBake to process your recipe. All you need to provide is the
+``basename`` of the recipe as described in the previous section::
+
+ $ bitbake basename
+
+During the build, the OpenEmbedded build system creates a temporary work
+directory for each recipe
+(``${``\ :term:`WORKDIR`\ ``}``)
+where it keeps extracted source files, log files, intermediate
+compilation and packaging files, and so forth.
+
+The path to the per-recipe temporary work directory depends on the
+context in which it is being built. The quickest way to find this path
+is to have BitBake return it by running the following::
+
+ $ bitbake -e basename | grep ^WORKDIR=
+
+As an example, assume a Source Directory
+top-level folder named ``poky``, a default :term:`Build Directory` at
+``poky/build``, and a ``qemux86-poky-linux`` machine target system.
+Furthermore, suppose your recipe is named ``foo_1.3.0.bb``. In this
+case, the work directory the build system uses to build the package
+would be as follows::
+
+ poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0
+
+Inside this directory you can find sub-directories such as ``image``,
+``packages-split``, and ``temp``. After the build, you can examine these
+to determine how well the build went.
+
+.. note::
+
+ You can find log files for each task in the recipe's ``temp``
+ directory (e.g. ``poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0/temp``).
+ Log files are named ``log.taskname`` (e.g. ``log.do_configure``,
+ ``log.do_fetch``, and ``log.do_compile``).
+
+You can find more information about the build process in
+":doc:`/overview-manual/development-environment`"
+chapter of the Yocto Project Overview and Concepts Manual.
+
+Fetching Code
+=============
+
+The first thing your recipe must do is specify how to fetch the source
+files. Fetching is controlled mainly through the
+:term:`SRC_URI` variable. Your recipe
+must have a :term:`SRC_URI` variable that points to where the source is
+located. For a graphical representation of source locations, see the
+":ref:`overview-manual/concepts:sources`" section in
+the Yocto Project Overview and Concepts Manual.
+
+The :ref:`ref-tasks-fetch` task uses the prefix of each entry in the
+:term:`SRC_URI` variable value to determine which
+:ref:`fetcher <bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`
+to use to get your source files. It is the :term:`SRC_URI` variable that triggers
+the fetcher. The :ref:`ref-tasks-patch` task uses the variable after source is
+fetched to apply patches. The OpenEmbedded build system uses
+:term:`FILESOVERRIDES` for scanning directory locations for local files in
+:term:`SRC_URI`.
+
+The :term:`SRC_URI` variable in your recipe must define each unique location
+for your source files. It is good practice to not hard-code version
+numbers in a URL used in :term:`SRC_URI`. Rather than hard-code these
+values, use ``${``\ :term:`PV`\ ``}``,
+which causes the fetch process to use the version specified in the
+recipe filename. Specifying the version in this manner means that
+upgrading the recipe to a future version is as simple as renaming the
+recipe to match the new version.
+
+Here is a simple example from the
+``meta/recipes-devtools/strace/strace_5.5.bb`` recipe where the source
+comes from a single tarball. Notice the use of the
+:term:`PV` variable::
+
+ SRC_URI = "https://strace.io/files/${PV}/strace-${PV}.tar.xz \
+
+Files mentioned in :term:`SRC_URI` whose names end in a typical archive
+extension (e.g. ``.tar``, ``.tar.gz``, ``.tar.bz2``, ``.zip``, and so
+forth) are automatically extracted during the
+:ref:`ref-tasks-unpack` task. For
+another example that specifies these types of files, see the
+":ref:`dev-manual/new-recipe:building an autotooled package`" section.
+
+Another way of specifying source is from an SCM. For Git repositories,
+you must specify :term:`SRCREV` and you should specify :term:`PV` to include
+the revision with :term:`SRCPV`. Here is an example from the recipe
+``meta/recipes-core/musl/gcompat_git.bb``::
+
+ SRC_URI = "git://git.adelielinux.org/adelie/gcompat.git;protocol=https;branch=current"
+
+ PV = "1.0.0+1.1+git${SRCPV}"
+ SRCREV = "af5a49e489fdc04b9cf02547650d7aeaccd43793"
+
+If your :term:`SRC_URI` statement includes URLs pointing to individual files
+fetched from a remote server other than a version control system,
+BitBake attempts to verify the files against checksums defined in your
+recipe to ensure they have not been tampered with or otherwise modified
+since the recipe was written. Multiple checksums are supported:
+``SRC_URI[md5sum]``, ``SRC_URI[sha1sum]``, ``SRC_URI[sha256sum]``,
+``SRC_URI[sha384sum]``, and ``SRC_URI[sha512sum]``, but only
+``SRC_URI[sha256sum]`` is commonly used.
+
+.. note::
+
+ ``SRC_URI[md5sum]`` used to also be commonly used, but it is deprecated
+ and should be replaced by ``SRC_URI[sha256sum]`` when updating existing
+ recipes.
+
+If your :term:`SRC_URI` variable points to more than a single URL (excluding
+SCM URLs), you need to provide the ``sha256`` checksum for each URL. For these
+cases, you provide a name for each URL as part of the :term:`SRC_URI` and then
+reference that name in the subsequent checksum statements. Here is an example
+combining lines from the files ``git.inc`` and ``git_2.24.1.bb``::
+
+ SRC_URI = "${KERNELORG_MIRROR}/software/scm/git/git-${PV}.tar.gz;name=tarball \
+ ${KERNELORG_MIRROR}/software/scm/git/git-manpages-${PV}.tar.gz;name=manpages"
+
+ SRC_URI[tarball.sha256sum] = "ad5334956301c86841eb1e5b1bb20884a6bad89a10a6762c958220c7cf64da02"
+ SRC_URI[manpages.sha256sum] = "9a7ae3a093bea39770eb96ca3e5b40bff7af0b9f6123f089d7821d0e5b8e1230"
+
+The proper value for the ``sha256`` checksum might be available together
+with other signatures (e.g. ``md5``, ``sha1``, ``GPG``, and so forth) on
+the download page for the upstream source. Because the OpenEmbedded
+build system typically only deals with ``sha256sum``, you should verify
+all the signatures you find by hand.
+
+If no :term:`SRC_URI` checksums are specified when you attempt to build the
+recipe, or you provide an incorrect checksum, the build will produce an
+error for each missing or incorrect checksum. As part of the error
+message, the build system provides the checksum string corresponding to
+the fetched file. Once you have the correct checksums, you can copy and
+paste them into your recipe and then run the build again to continue.
+
+.. note::
+
+ As mentioned, if the upstream source provides signatures for
+ verifying the downloaded source code, you should verify those
+ manually before setting the checksum values in the recipe and
+ continuing with the build.
+
+This final example is a bit more complicated and is from the
+``meta/recipes-sato/rxvt-unicode/rxvt-unicode_9.20.bb`` recipe. The
+example's :term:`SRC_URI` statement identifies multiple files as the source
+files for the recipe: a tarball, a patch file, a desktop file, and an icon::
+
+ SRC_URI = "http://dist.schmorp.de/rxvt-unicode/Attic/rxvt-unicode-${PV}.tar.bz2 \
+ file://xwc.patch \
+ file://rxvt.desktop \
+ file://rxvt.png"
+
+When you specify local files using the ``file://`` URI protocol, the
+build system fetches files from the local machine. The path is relative
+to the :term:`FILESPATH` variable
+and searches specific directories in a certain order:
+``${``\ :term:`BP`\ ``}``,
+``${``\ :term:`BPN`\ ``}``, and
+``files``. The directories are assumed to be subdirectories of the
+directory in which the recipe or append file resides. For another
+example that specifies these types of files, see the
+"`building a single .c file package`_" section.
+
+The previous example also specifies a patch file. Patch files are files
+whose names usually end in ``.patch`` or ``.diff`` but can end with
+compressed suffixes such as ``diff.gz`` and ``patch.bz2``, for example.
+The build system automatically applies patches as described in the
+":ref:`dev-manual/new-recipe:patching code`" section.
+
+Fetching Code Through Firewalls
+-------------------------------
+
+Some users are behind firewalls and need to fetch code through a proxy.
+See the ":doc:`/ref-manual/faq`" chapter for advice.
+
+Limiting the Number of Parallel Connections
+-------------------------------------------
+
+Some users are behind firewalls or use servers where the number of parallel
+connections is limited. In such cases, you can limit the number of fetch
+tasks being run in parallel by adding the following to your ``local.conf``
+file::
+
+ do_fetch[number_threads] = "4"
+
+Unpacking Code
+==============
+
+During the build, the
+:ref:`ref-tasks-unpack` task unpacks
+the source with ``${``\ :term:`S`\ ``}``
+pointing to where it is unpacked.
+
+If you are fetching your source files from an upstream source archived
+tarball and the tarball's internal structure matches the common
+convention of a top-level subdirectory named
+``${``\ :term:`BPN`\ ``}-${``\ :term:`PV`\ ``}``,
+then you do not need to set :term:`S`. However, if :term:`SRC_URI` specifies to
+fetch source from an archive that does not use this convention, or from
+an SCM like Git or Subversion, your recipe needs to define :term:`S`.
+
+Once BitBake has successfully unpacked the source files while processing
+your recipe, make sure that the directory pointed to by ``${S}``
+matches the structure of the source.
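+
+For example, when fetching from a Git repository, the source is unpacked
+into a ``git`` subdirectory of the work directory, so recipes typically
+contain lines such as the following (the repository URL is hypothetical)::
+
+ SRC_URI = "git://git.example.com/myproject.git;protocol=https;branch=main"
+ S = "${WORKDIR}/git"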
+
+Patching Code
+=============
+
+Sometimes it is necessary to patch code after it has been fetched. Any
+files mentioned in :term:`SRC_URI` whose names end in ``.patch`` or
+``.diff`` or compressed versions of these suffixes (e.g. ``diff.gz``,
+``patch.bz2``, etc.) are treated as patches. The
+:ref:`ref-tasks-patch` task
+automatically applies these patches.
+
+The build system should be able to apply patches with the "-p1" option
+(i.e. one directory level in the path will be stripped off). If your
+patch needs to have more directory levels stripped off, specify the
+number of levels using the "striplevel" option in the :term:`SRC_URI` entry
+for the patch. Alternatively, if your patch needs to be applied in a
+specific subdirectory that is not specified in the patch file, use the
+"patchdir" option in the entry.
+
+As with all local files referenced in
+:term:`SRC_URI` using ``file://``,
+you should place patch files in a directory next to the recipe either
+named the same as the base name of the recipe
+(:term:`BP` and
+:term:`BPN`) or "files".
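+
+As an illustration, the following hypothetical :term:`SRC_URI` entries show
+the "striplevel" and "patchdir" options in use (the patch names and the
+subdirectory are made up)::
+
+ SRC_URI += "file://0001-fix-build.patch;striplevel=2 \
+ file://mylib-fix.patch;patchdir=src/mylib"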
+
+Licensing
+=========
+
+Your recipe needs to define variables related to the license
+under which the software is distributed. See the
+:ref:`contributor-guide/recipe-style-guide:recipe license fields`
+section in the Contributor Guide for details.
+
+Dependencies
+============
+
+Most software packages have a short list of other packages that they
+require, which are called dependencies. These dependencies fall into two
+main categories: build-time dependencies, which are required when the
+software is built; and runtime dependencies, which are required to be
+installed on the target in order for the software to run.
+
+Within a recipe, you specify build-time dependencies using the
+:term:`DEPENDS` variable. Although there are nuances,
+items specified in :term:`DEPENDS` should be names of other
+recipes. It is important that you specify all build-time dependencies
+explicitly.
+
+Another consideration is that configure scripts might automatically
+check for optional dependencies and enable corresponding functionality
+if those dependencies are found. If you wish to make a recipe that is
+more generally useful (e.g. publish the recipe in a layer for others to
+use), instead of hard-disabling the functionality, you can use the
+:term:`PACKAGECONFIG` variable to allow functionality and the
+corresponding dependencies to be enabled and disabled easily by other
+users of the recipe.
+
+Similar to build-time dependencies, you specify runtime dependencies
+through a variable -
+:term:`RDEPENDS`, which is
+package-specific. All variables that are package-specific need to have
+the name of the package added to the end as an override. Since the main
+package for a recipe has the same name as the recipe, and the recipe's
+name can be found through the
+``${``\ :term:`PN`\ ``}`` variable, then
+you specify the dependencies for the main package by setting
+``RDEPENDS:${PN}``. If the package were named ``${PN}-tools``, then you
+would set ``RDEPENDS:${PN}-tools``, and so forth.
+
+Some runtime dependencies will be set automatically at packaging time.
+These dependencies include any shared library dependencies (i.e. if a
+package "example" contains "libexample" and another package "mypackage"
+contains a binary that links to "libexample" then the OpenEmbedded build
+system will automatically add a runtime dependency to "mypackage" on
+"example"). See the
+":ref:`overview-manual/concepts:automatically added runtime dependencies`"
+section in the Yocto Project Overview and Concepts Manual for further
+details.
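+
+As a brief illustrative sketch (the dependency names and configure options
+shown here are hypothetical), a recipe might declare its dependencies like
+this::
+
+ DEPENDS = "zlib"
+
+ RDEPENDS:${PN} = "bash"
+
+ PACKAGECONFIG ??= ""
+ PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
+
+Here, the ``gnutls`` :term:`PACKAGECONFIG` flag is disabled by default, and
+users of the recipe can enable it (for example from ``local.conf`` or a
+``.bbappend``) without modifying the recipe itself.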
+
+Configuring the Recipe
+======================
+
+Most software provides some means of setting build-time configuration
+options before compilation. Typically, setting these options is
+accomplished by running a configure script with options, or by modifying
+a build configuration file.
+
+.. note::
+
+ As of Yocto Project Release 1.7, some of the core recipes that
+ package binary configuration scripts now disable the scripts due to
+ the scripts previously requiring error-prone path substitution. The
+ OpenEmbedded build system uses ``pkg-config`` now, which is much more
+ robust. You can find a list of the ``*-config`` scripts that are disabled
+ in the ":ref:`migration-1.7-binary-configuration-scripts-disabled`" section
+ in the Yocto Project Reference Manual.
+
+A major part of build-time configuration is about checking for
+build-time dependencies and possibly enabling optional functionality as
+a result. You need to specify any build-time dependencies for the
+software you are building in your recipe's
+:term:`DEPENDS` value, in terms of
+other recipes that satisfy those dependencies. You can often find
+build-time or runtime dependencies described in the software's
+documentation.
+
+The following list provides configuration items of note based on how
+your software is built:
+
+- *Autotools:* If your source files have a ``configure.ac`` file, then
+ your software is built using Autotools. If this is the case, you just
+ need to modify the configuration.
+
+ When using Autotools, your recipe needs to inherit the
+ :ref:`ref-classes-autotools` class and it does not have to
+ contain a :ref:`ref-tasks-configure` task. However, you might still want to
+ make some adjustments. For example, you can set :term:`EXTRA_OECONF` or
+ :term:`PACKAGECONFIG_CONFARGS` to pass any needed configure options that
+ are specific to the recipe.
+
+- *CMake:* If your source files have a ``CMakeLists.txt`` file, then
+ your software is built using CMake. If this is the case, you just
+ need to modify the configuration.
+
+ When you use CMake, your recipe needs to inherit the
+ :ref:`ref-classes-cmake` class and it does not have to contain a
+ :ref:`ref-tasks-configure` task. You can make some adjustments by setting
+ :term:`EXTRA_OECMAKE` to pass any needed configure options that are
+ specific to the recipe.
+
+ .. note::
+
+ If you need to install one or more custom CMake toolchain files
+ that are supplied by the application you are building, install the
+ files to ``${D}${datadir}/cmake/Modules`` during :ref:`ref-tasks-install`.
+
+- *Other:* If your source files do not have a ``configure.ac`` or
+ ``CMakeLists.txt`` file, then your software is built using some
+ method other than Autotools or CMake. If this is the case, you
+ normally need to provide a
+ :ref:`ref-tasks-configure` task
+ in your recipe unless, of course, there is nothing to configure.
+
+ Even if your software is not being built by Autotools or CMake, you
+ still might not need to deal with any configuration issues. You need
+ to determine if configuration is even a required step. You might need
+ to modify a Makefile or some configuration file used for the build to
+ specify necessary build options. Or, perhaps you might need to run a
+ provided, custom configure script with the appropriate options.
+
+ For the case involving a custom configure script, you would run
+ ``./configure --help`` and look for the options you need to set.
+
+Once configuration succeeds, it is always good practice to look at the
+``log.do_configure`` file to ensure that the appropriate options have
+been enabled and no additional build-time dependencies need to be added
+to :term:`DEPENDS`. For example, if the configure script reports that it
+found something not mentioned in :term:`DEPENDS`, or that it did not find
+something that it needed for some desired optional functionality, then
+you would need to add those to :term:`DEPENDS`. Looking at the log might
+also reveal items being checked for, enabled, or both that you do not
+want, or items not being found that are in :term:`DEPENDS`, in which case
+you would need to look at passing extra options to the configure script
+as needed. For reference information on configure options specific to
+the software you are building, you can consult the output of the
+``./configure --help`` command within ``${S}`` or consult the software's
+upstream documentation.
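+
+For instance, for an Autotools-based recipe, a minimal sketch of such
+adjustments might look like this (the configure options shown are
+hypothetical)::
+
+ inherit autotools
+
+ EXTRA_OECONF = "--disable-examples --without-tests"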
+
+Using Headers to Interface with Devices
+=======================================
+
+If your recipe builds an application that needs to communicate with some
+device or needs an API into a custom kernel, you will need to provide
+appropriate header files. Under no circumstances should you ever modify
+the existing
+``meta/recipes-kernel/linux-libc-headers/linux-libc-headers.inc`` file.
+These headers are used to build ``libc`` and must not be compromised
+with custom or machine-specific header information. If you customize
+``libc`` through modified headers, all other applications that use
+``libc`` are affected as well.
+
+.. note::
+
+ Never copy and customize the ``libc`` header file (i.e.
+ ``meta/recipes-kernel/linux-libc-headers/linux-libc-headers.inc``).
+
+The correct way to interface to a device or custom kernel is to use a
+separate package that provides the additional headers for the driver or
+other unique interfaces. When doing so, your application also becomes
+responsible for creating a dependency on that specific provider.
+
+Consider the following:
+
+- Never modify ``linux-libc-headers.inc``. Consider that file to be
+ part of the ``libc`` system, and not something you use to access the
+ kernel directly. You should access ``libc`` through specific ``libc``
+ calls.
+
+- Applications that must talk directly to devices should either provide
+ necessary headers themselves, or establish a dependency on a special
+ headers package that is specific to that driver.
+
+For example, suppose you want to modify an existing header to add I/O
+control or network support. If the modifications are used by a small
+number of programs, providing a unique version of a header is easy and has
+little impact. When doing so, bear in mind the guidelines in the
+previous list.
+
+.. note::
+
+ If for some reason your changes need to modify the behavior of the ``libc``,
+ and subsequently all other applications on the system, use a ``.bbappend``
+ to modify the ``linux-libc-headers.inc`` file. However, take care not to
+ make the changes machine specific.
+
+Consider a case where your kernel is older and you need an older
+``libc`` ABI. The headers installed by your recipe should still be those of
+a standard mainline kernel, not of your own custom one.
+
+When you use custom kernel headers you need to get them from
+:term:`STAGING_KERNEL_DIR`,
+which is the directory with kernel headers that are required to build
+out-of-tree modules. Your recipe will also need the following::
+
+ do_configure[depends] += "virtual/kernel:do_shared_workdir"
+
+Compilation
+===========
+
+During a build, the :ref:`ref-tasks-compile` task happens after source is fetched,
+unpacked, and configured. If the recipe passes through :ref:`ref-tasks-compile`
+successfully, nothing needs to be done.
+
+However, if the compile step fails, you need to diagnose the failure.
+Here are some common issues that cause failures.
+
+.. note::
+
+ For cases where improper paths are detected for configuration files
+ or for when libraries/headers cannot be found, be sure you are using
+ the more robust ``pkg-config``. See the note in section
+ ":ref:`dev-manual/new-recipe:Configuring the Recipe`" for additional information.
+
+- *Parallel build failures:* These failures manifest themselves as
+ intermittent errors, or errors reporting that a file or directory
+ that should be created by some other part of the build process could
+ not be found. This type of failure can occur even if, upon
+ inspection, the file or directory does exist after the build has
+ failed, because that part of the build process happened in the wrong
+ order.
+
+ To fix the problem, you need to either satisfy the missing dependency
+ in the Makefile or whatever script produced the Makefile, or (as a
+ workaround) set :term:`PARALLEL_MAKE` to an empty string::
+
+ PARALLEL_MAKE = ""
+
+ For information on parallel Makefile issues, see the
+ ":ref:`dev-manual/debugging:debugging parallel make races`" section.
+
+- *Improper host path usage:* This failure applies to recipes building
+ for the target or ":ref:`ref-classes-nativesdk`" only. The
+ failure occurs when the compilation process uses improper headers,
+ libraries, or other files from the host system when cross-compiling for
+ the target.
+
+ To fix the problem, examine the ``log.do_compile`` file to identify
+ the host paths being used (e.g. ``/usr/include``, ``/usr/lib``, and
+ so forth) and then either add configure options, apply a patch, or do
+ both. A quick search of the log, as sketched after this list, can help
+ locate such paths.
+
+- *Failure to find required libraries/headers:* If a build-time
+ dependency is missing because it has not been declared in
+ :term:`DEPENDS`, or because the
+ dependency exists but the path used by the build process to find the
+ file is incorrect and the configure step did not detect it, the
+ compilation process could fail. For either of these failures, the
+ compilation process notes that files could not be found. In these
+ cases, you need to go back and add additional options to the
+ configure script as well as possibly add additional build-time
+ dependencies to :term:`DEPENDS`.
+
+ Occasionally, it is necessary to apply a patch to the source to
+ ensure the correct paths are used. If you need to specify paths to
+ find files staged into the sysroot from other recipes, use the
+ variables that the OpenEmbedded build system provides (e.g.
+ :term:`STAGING_BINDIR`, :term:`STAGING_INCDIR`, :term:`STAGING_DATADIR`, and so
+ forth).
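+
+For the improper host path case above, one quick, illustrative way to spot
+suspicious compiler flags is to search the log from within the recipe's
+``temp`` directory::
+
+ $ grep -nE -- '-(I|L)/usr/(include|lib)' log.do_compile
+
+Any matches point at host include or library paths being passed to the
+cross-compiler and usually indicate that a configure option or a patch is
+needed.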
+
+Installing
+==========
+
+The :ref:`ref-tasks-install` task copies the built files along with their
+directory hierarchy to locations that mirror their final locations on the
+target device. The installation process copies files from the
+``${``\ :term:`S`\ ``}``,
+``${``\ :term:`B`\ ``}``, and
+``${``\ :term:`WORKDIR`\ ``}``
+directories to the ``${``\ :term:`D`\ ``}``
+directory to create the structure as it should appear on the target
+system.
+
+How your software is built affects what you must do to be sure your
+software is installed correctly. The following list describes what you
+must do for installation depending on the type of build system used by
+the software being built:
+
+- *Autotools and CMake:* If the software your recipe is building uses
+ Autotools or CMake, the OpenEmbedded build system understands how to
+ install the software. Consequently, you do not have to have a
+ :ref:`ref-tasks-install` task as part of your recipe. You just need to make
+ sure the install portion of the build completes with no issues.
+ However, if you wish to install additional files not already being
+ installed by ``make install``, you should do this using a
+ ``do_install:append`` function using the install command as described
+ in the "Manual" bulleted item later in this list.
+
+- *Other (using* ``make install``\ *)*: You need to define a :ref:`ref-tasks-install`
+ function in your recipe. The function should call
+ ``oe_runmake install`` and will likely need to pass in the
+ destination directory as well. How you pass that path is dependent on
+ how the ``Makefile`` being run is written (e.g. ``DESTDIR=${D}``,
+ ``PREFIX=${D}``, ``INSTALLROOT=${D}``, and so forth).
+
+ For an example recipe using ``make install``, see the
+ ":ref:`dev-manual/new-recipe:building a makefile-based package`" section.
+
+- *Manual:* You need to define a :ref:`ref-tasks-install` function in your
+ recipe. The function must first use ``install -d`` to create the
+ directories under
+ ``${``\ :term:`D`\ ``}``. Once the
+ directories exist, your function can use ``install`` to manually
+ install the built software into the directories.
+
+ You can find more information on ``install`` at
+ https://www.gnu.org/software/coreutils/manual/html_node/install-invocation.html.
+
+For the scenarios that do not use Autotools or CMake, you need to track
+the installation and diagnose and fix any issues until everything
+installs correctly. You need to look in the default location of
+``${D}``, which is ``${WORKDIR}/image``, to be sure your files have been
+installed correctly.
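+
+As an example, here is a minimal sketch of a ``do_install:append`` function
+that installs one extra configuration file (the file name is hypothetical
+and the file would also need to be listed in :term:`SRC_URI` as a
+``file://`` entry)::
+
+ do_install:append() {
+ install -d ${D}${sysconfdir}
+ install -m 0644 ${WORKDIR}/myapp.conf ${D}${sysconfdir}/myapp.conf
+ }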
+
+.. note::
+
+ - During the installation process, you might need to modify some of
+ the installed files to suit the target layout. For example, you
+ might need to replace hard-coded paths in an initscript with
+ values of variables provided by the build system, such as
+ replacing ``/usr/bin/`` with ``${bindir}``. If you do perform such
+ modifications during :ref:`ref-tasks-install`, be sure to modify the
+ destination file after copying rather than before copying.
+ Modifying after copying ensures that the build system can
+ re-execute :ref:`ref-tasks-install` if needed.
+
+ - ``oe_runmake install``, which can be run directly or can be run
+ indirectly by the :ref:`ref-classes-autotools` and
+ :ref:`ref-classes-cmake` classes, runs ``make install`` in parallel.
+ Sometimes, a Makefile can have missing dependencies between targets that
+ can result in race conditions. If you experience intermittent failures
+ during :ref:`ref-tasks-install`, you might be able to work around them by
+ disabling parallel Makefile installs by adding the following to the
+ recipe::
+
+ PARALLEL_MAKEINST = ""
+
+ See :term:`PARALLEL_MAKEINST` for additional information.
+
+ - If you need to install one or more custom CMake toolchain files
+ that are supplied by the application you are building, install the
+ files to ``${D}${datadir}/cmake/Modules`` during
+ :ref:`ref-tasks-install`.
+
+Enabling System Services
+========================
+
+If you want to install a service, which is a process that usually starts
+on boot and runs in the background, then you must include some
+additional definitions in your recipe.
+
+If you are adding services and the service initialization script or the
+service file itself is not installed, you must provide for that
+installation in your recipe using a ``do_install:append`` function. If
+your recipe already has a :ref:`ref-tasks-install` function, update the function
+near its end rather than adding an additional ``do_install:append``
+function.
+
+When you create the installation for your services, you need to
+accomplish what is normally done by ``make install``. In other words,
+make sure your installation arranges the output similar to how it is
+arranged on the target system.
+
+The OpenEmbedded build system provides support for starting services two
+different ways:
+
+- *SysVinit:* SysVinit is an init system that controls the very basic
+ functions of your system. The init program is the first program started
+ by the Linux kernel when the system boots. Init then controls the
+ startup, running, and shutdown of all other programs.
+
+ To enable a service using SysVinit, your recipe needs to inherit the
+ :ref:`ref-classes-update-rc.d` class. The class helps
+ facilitate safely installing the package on the target.
+
+ You will need to set the
+ :term:`INITSCRIPT_PACKAGES`,
+ :term:`INITSCRIPT_NAME`,
+ and
+ :term:`INITSCRIPT_PARAMS`
+ variables within your recipe.
+
+- *systemd:* System Management Daemon (systemd) was designed to replace
+ SysVinit and to provide enhanced management of services. For more
+ information on systemd, see the systemd homepage at
+ https://freedesktop.org/wiki/Software/systemd/.
+
+ To enable a service using systemd, your recipe needs to inherit the
+ :ref:`ref-classes-systemd` class. See the ``systemd.bbclass`` file
+ located in your :term:`Source Directory` for more information.
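+
+For example, a minimal sketch of the systemd case, assuming a hypothetical
+``myapp.service`` unit file shipped through :term:`SRC_URI`, might look like
+this::
+
+ inherit systemd
+
+ SYSTEMD_SERVICE:${PN} = "myapp.service"
+
+ do_install:append() {
+ install -d ${D}${systemd_system_unitdir}
+ install -m 0644 ${WORKDIR}/myapp.service ${D}${systemd_system_unitdir}/myapp.service
+ }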
+
+Packaging
+=========
+
+Successful packaging is a combination of automated processes performed
+by the OpenEmbedded build system and some specific steps you need to
+take. The following list describes the process:
+
+- *Splitting Files*: The :ref:`ref-tasks-package` task splits the files produced
+ by the recipe into logical components. Even software that produces a
+ single binary might still have debug symbols, documentation, and
+ other logical components that should be split out. The :ref:`ref-tasks-package`
+ task ensures that files are split up and packaged correctly.
+
+- *Running QA Checks*: The :ref:`ref-classes-insane` class adds a
+ step to the package generation process so that output quality
+ assurance checks are generated by the OpenEmbedded build system. This
+ step performs a range of checks to be sure the build's output is free
+ of common problems that show up during runtime. For information on
+ these checks, see the :ref:`ref-classes-insane` class and
+ the ":ref:`ref-manual/qa-checks:qa error and warning messages`"
+ chapter in the Yocto Project Reference Manual.
+
+- *Hand-Checking Your Packages*: After you build your software, you
+ need to be sure your packages are correct. Examine the
+ ``${``\ :term:`WORKDIR`\ ``}/packages-split``
+ directory and make sure files are where you expect them to be. If you
+ discover problems, you can set
+ :term:`PACKAGES`,
+ :term:`FILES`,
+ ``do_install(:append)``, and so forth as needed.
+
+- *Splitting an Application into Multiple Packages*: If you need to
+ split an application into several packages, see the
+ ":ref:`dev-manual/new-recipe:splitting an application into multiple packages`"
+ section for an example.
+
+- *Installing a Post-Installation Script*: For an example showing how
+ to install a post-installation script, see the
+ ":ref:`dev-manual/new-recipe:post-installation scripts`" section.
+
+- *Marking Package Architecture*: Depending on what your recipe is
+ building and how it is configured, it might be important to mark the
+ packages produced as being specific to a particular machine, or to
+ mark them as not being specific to a particular machine or
+ architecture at all.
+
+ By default, packages apply to any machine with the same architecture
+ as the target machine. When a recipe produces packages that are
+ machine-specific (e.g. the
+ :term:`MACHINE` value is passed
+ into the configure script or a patch is applied only for a particular
+ machine), you should mark them as such by adding the following to the
+ recipe::
+
+ PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+ On the other hand, if the recipe produces packages that do not
+ contain anything specific to the target machine or architecture at
+ all (e.g. recipes that simply package script files or configuration
+ files), you should use the :ref:`ref-classes-allarch` class to
+ do this for you by adding this to your recipe::
+
+ inherit allarch
+
+ Ensuring that the package architecture is correct is not critical
+ while you are doing the first few builds of your recipe. However, it
+ is important in order to ensure that your recipe rebuilds (or does
+ not rebuild) appropriately in response to changes in configuration,
+ and to ensure that you get the appropriate packages installed on the
+ target machine, particularly if you run separate builds for more than
+ one target machine.
+
+Sharing Files Between Recipes
+=============================
+
+Recipes often need to use files provided by other recipes on the build
+host. For example, an application linking to a common library needs
+access to the library itself and its associated headers. The way this
+access is accomplished is by populating a sysroot with files. Each
+recipe has two sysroots in its work directory, one for target files
+(``recipe-sysroot``) and one for files that are native to the build host
+(``recipe-sysroot-native``).
+
+.. note::
+
+ You might find the term "staging" used within the Yocto Project
+ to refer to files populating sysroots (e.g. the :term:`STAGING_DIR`
+ variable).
+
+Recipes should never populate the sysroot directly (i.e. write files
+into sysroot). Instead, files should be installed into standard
+locations during the
+:ref:`ref-tasks-install` task within
+the ``${``\ :term:`D`\ ``}`` directory. The
+reason for this limitation is that almost all files that populate the
+sysroot are cataloged in manifests in order to ensure the files can be
+removed later when a recipe is either modified or removed. Thus, the
+sysroot is able to remain free from stale files.
+
+A subset of the files installed by the :ref:`ref-tasks-install` task are
+used by the :ref:`ref-tasks-populate_sysroot` task as defined by the
+:term:`SYSROOT_DIRS` variable to automatically populate the sysroot. It
+is possible to modify the list of directories that populate the sysroot.
+The following example shows how you could add the ``/opt`` directory to
+the list of directories within a recipe::
+
+ SYSROOT_DIRS += "/opt"
+
+.. note::
+
+ The ``/sysroot-only`` directory is meant for recipes that generate artifacts
+ that are not included in the target filesystem, allowing them to share
+ these artifacts without needing to use :term:`DEPLOY_DIR`.
+
+For a more complete description of the :ref:`ref-tasks-populate_sysroot`
+task and its associated functions, see the
+:ref:`staging <ref-classes-staging>` class.
+
+Using Virtual Providers
+=======================
+
+Prior to a build, if you know that several different recipes provide the
+same functionality, you can use a virtual provider (i.e. ``virtual/*``)
+as a placeholder for the actual provider. The actual provider is
+determined at build-time.
+
+A common scenario where a virtual provider is used would be for the kernel
+recipe. Suppose you have three kernel recipes whose :term:`PN` values map to
+``kernel-big``, ``kernel-mid``, and ``kernel-small``. Furthermore, each of
+these recipes in some way uses a :term:`PROVIDES` statement that essentially
+identifies itself as being able to provide ``virtual/kernel``. Here is one way
+through the :ref:`ref-classes-kernel` class::
+
+ PROVIDES += "virtual/kernel"
+
+Any recipe that inherits the :ref:`ref-classes-kernel` class is
+going to utilize a :term:`PROVIDES` statement that identifies that recipe as
+being able to provide the ``virtual/kernel`` item.
+
+Now comes the time to actually build an image and you need a kernel
+recipe, but which one? You can configure your build to call out the
+kernel recipe you want by using the :term:`PREFERRED_PROVIDER` variable. As
+an example, consider the :yocto_git:`x86-base.inc
+</poky/tree/meta/conf/machine/include/x86/x86-base.inc>` include file, which is a
+machine (i.e. :term:`MACHINE`) configuration file. This include file is the
+reason all x86-based machines use the ``linux-yocto`` kernel. Here are the
+relevant lines from the include file::
+
+ PREFERRED_PROVIDER_virtual/kernel ??= "linux-yocto"
+ PREFERRED_VERSION_linux-yocto ??= "4.15%"
+
+When you use a virtual provider, you do not have to "hard code" a recipe
+name as a build dependency. You can use the
+:term:`DEPENDS` variable to state the
+build is dependent on ``virtual/kernel`` for example::
+
+ DEPENDS = "virtual/kernel"
+
+During the build, the OpenEmbedded build system picks
+the correct recipe needed for the ``virtual/kernel`` dependency based on
+the :term:`PREFERRED_PROVIDER` variable. If you want to use the small kernel
+mentioned at the beginning of this section, configure your build as
+follows::
+
+ PREFERRED_PROVIDER_virtual/kernel ??= "kernel-small"
+
+.. note::
+
+ Any recipe that :term:`PROVIDES` a ``virtual/*`` item that is ultimately not
+ selected through :term:`PREFERRED_PROVIDER` does not get built. Preventing these
+ recipes from building is usually the desired behavior since this mechanism's
+ purpose is to select between mutually exclusive alternative providers.
+
+The following lists specific examples of virtual providers:
+
+- ``virtual/kernel``: Provides the name of the kernel recipe to use
+ when building a kernel image.
+
+- ``virtual/bootloader``: Provides the name of the bootloader to use
+ when building an image.
+
+- ``virtual/libgbm``: Provides ``gbm.pc``.
+
+- ``virtual/egl``: Provides ``egl.pc`` and possibly ``wayland-egl.pc``.
+
+- ``virtual/libgl``: Provides ``gl.pc`` (i.e. libGL).
+
+- ``virtual/libgles1``: Provides ``glesv1_cm.pc`` (i.e. libGLESv1_CM).
+
+- ``virtual/libgles2``: Provides ``glesv2.pc`` (i.e. libGLESv2).
+
+.. note::
+
+ Virtual providers only apply to build time dependencies specified with
+ :term:`PROVIDES` and :term:`DEPENDS`. They do not apply to runtime
+ dependencies specified with :term:`RPROVIDES` and :term:`RDEPENDS`.
+
+Properly Versioning Pre-Release Recipes
+=======================================
+
+Sometimes the name of a recipe can lead to versioning problems when the
+recipe is upgraded to a final release. For example, consider the
+``irssi_0.8.16-rc1.bb`` recipe file in the list of example recipes in
+the ":ref:`dev-manual/new-recipe:storing and naming the recipe`" section.
+This recipe is at a release candidate stage (i.e. "rc1"). When the recipe is
+released, the recipe filename becomes ``irssi_0.8.16.bb``. The version
+change from ``0.8.16-rc1`` to ``0.8.16`` is seen as a decrease by the
+build system and package managers, so the resulting packages will not
+correctly trigger an upgrade.
+
+In order to ensure the versions compare properly, the recommended
+convention is to use a tilde (``~``) character as follows::
+
+ PV = "0.8.16~rc1"
+
+This way ``0.8.16~rc1`` sorts before ``0.8.16``. See the
+":ref:`contributor-guide/recipe-style-guide:version policy`" section in the
+Yocto Project and OpenEmbedded Contributor Guide for more details about
+versioning code corresponding to a pre-release or to a specific Git commit.
+
+Post-Installation Scripts
+=========================
+
+Post-installation scripts run immediately after installing a package on
+the target or during image creation when a package is included in an
+image. To add a post-installation script to a package, add a
+``pkg_postinst:``\ `PACKAGENAME`\ ``()`` function to the recipe file
+(``.bb``) and replace `PACKAGENAME` with the name of the package you want
+to attach to the ``postinst`` script. To apply the post-installation
+script to the main package for the recipe, which is usually what is
+required, specify
+``${``\ :term:`PN`\ ``}`` in place of
+PACKAGENAME.
+
+A post-installation function has the following structure::
+
+ pkg_postinst:PACKAGENAME() {
+ # Commands to carry out
+ }
+
+The script defined in the post-installation function is called when the
+root filesystem is created. If the script succeeds, the package is
+marked as installed.
+
+.. note::
+
+ Any RPM post-installation script that runs on the target should
+ return a 0 exit code. RPM does not allow non-zero exit codes for
+ these scripts; if a script returns a non-zero exit code, the RPM
+ package manager fails the package installation on the target.
+
+Sometimes it is necessary for the execution of a post-installation
+script to be delayed until the first boot. For example, the script might
+need to be executed on the device itself. To delay script execution
+until boot time, you must explicitly mark post installs to defer to the
+target. You can use ``pkg_postinst_ontarget()`` or call
+``postinst_intercept delay_to_first_boot`` from ``pkg_postinst()``. Any
+failure of a ``pkg_postinst()`` script (including exit 1) triggers an
+error during the
+:ref:`ref-tasks-rootfs` task.
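+
+Here is a minimal sketch of a post-installation script deferred to first
+boot on the target (the command being run is hypothetical)::
+
+ pkg_postinst_ontarget:${PN}() {
+ # Runs once on the device at first boot rather than during rootfs creation
+ myapp --generate-cache
+ }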
+
+If you have recipes that use a ``pkg_postinst`` function and they require
+the use of non-standard native tools that have dependencies during
+root filesystem construction, you need to use the
+:term:`PACKAGE_WRITE_DEPS`
+variable in your recipe to list these tools. If you do not use this
+variable, the tools might be missing and execution of the
+post-installation script is deferred until first boot. Deferring the
+script to the first boot is undesirable and impossible for read-only
+root filesystems.
+
+.. note::
+
+ There is equivalent support for pre-install, pre-uninstall, and post-uninstall
+ scripts by way of ``pkg_preinst``, ``pkg_prerm``, and ``pkg_postrm``,
+ respectively. These scripts work in exactly the same way as
+ ``pkg_postinst``, except that they run at different times. Also,
+ because of when they run, they cannot be run at image
+ creation time the way ``pkg_postinst`` can.
+
+Testing
+=======
+
+The final step for completing your recipe is to be sure that the
+software you built runs correctly. To accomplish runtime testing, add
+the build's output packages to your image and test them on the target.
+
+For information on how to customize your image by adding specific
+packages, see ":ref:`dev-manual/customizing-images:customizing images`" section.
+
+Examples
+========
+
+To help summarize how to write a recipe, this section provides some
+recipe examples given various scenarios:
+
+- `Building a single .c file package`_
+
+- `Building a Makefile-based package`_
+
+- `Building an Autotooled package`_
+
+- `Building a Meson package`_
+
+- `Splitting an application into multiple packages`_
+
+- `Packaging externally produced binaries`_
+
+Building a Single .c File Package
+---------------------------------
+
+Building an application from a single file that is stored locally (e.g. under
+``files``) requires a recipe that has the file listed in the :term:`SRC_URI`
+variable. Additionally, you need to manually write the :ref:`ref-tasks-compile`
+and :ref:`ref-tasks-install` tasks. The :term:`S` variable defines the
+directory containing the source code, which is set to :term:`WORKDIR` in this
+case --- the directory BitBake uses for the build::
+
+ SUMMARY = "Simple helloworld application"
+ SECTION = "examples"
+ LICENSE = "MIT"
+ LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
+
+ SRC_URI = "file://helloworld.c"
+
+ S = "${WORKDIR}"
+
+ do_compile() {
+ ${CC} ${LDFLAGS} helloworld.c -o helloworld
+ }
+
+ do_install() {
+ install -d ${D}${bindir}
+ install -m 0755 helloworld ${D}${bindir}
+ }
+
+By default, the ``helloworld``, ``helloworld-dbg``, and ``helloworld-dev`` packages
+are built. For information on how to customize the packaging process, see the
+":ref:`dev-manual/new-recipe:splitting an application into multiple packages`"
+section.
+
+Building a Makefile-Based Package
+---------------------------------
+
+Applications built with GNU ``make`` require a recipe that has the source archive
+listed in :term:`SRC_URI`. You do not need to add a :ref:`ref-tasks-compile`
+step since by default BitBake starts the ``make`` command to compile the
+application. If you need additional ``make`` options, you should store them in
+the :term:`EXTRA_OEMAKE` or :term:`PACKAGECONFIG_CONFARGS` variables. BitBake
+passes these options into the GNU ``make`` invocation. Note that a
+:ref:`ref-tasks-install` task is still required. Otherwise, BitBake runs an
+empty :ref:`ref-tasks-install` task by default.
+
+Some applications might require extra parameters to be passed to the
+compiler. For example, the application might need an additional header
+path. You can accomplish this by adding to the :term:`CFLAGS` variable. The
+following example shows this::
+
+ CFLAGS:prepend = "-I ${S}/include "
+
+In the following example, ``lz4`` is a makefile-based package::
+
+ SUMMARY = "Extremely Fast Compression algorithm"
+ DESCRIPTION = "LZ4 is a very fast lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-cores CPU. It also features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems."
+ HOMEPAGE = "https://github.com/lz4/lz4"
+
+ LICENSE = "BSD-2-Clause | GPL-2.0-only"
+ LIC_FILES_CHKSUM = "file://lib/LICENSE;md5=ebc2ea4814a64de7708f1571904b32cc \
+ file://programs/COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+ file://LICENSE;md5=d57c0d21cb917fb4e0af2454aa48b956 \
+ "
+
+ PE = "1"
+
+ SRCREV = "d44371841a2f1728a3f36839fd4b7e872d0927d3"
+
+ SRC_URI = "git://github.com/lz4/lz4.git;branch=release;protocol=https \
+ file://CVE-2021-3520.patch \
+ "
+ UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)"
+
+ S = "${WORKDIR}/git"
+
+ CVE_STATUS[CVE-2014-4715] = "fixed-version: Fixed in r118, which is larger than the current version"
+
+ EXTRA_OEMAKE = "PREFIX=${prefix} CC='${CC}' CFLAGS='${CFLAGS}' DESTDIR=${D} LIBDIR=${libdir} INCLUDEDIR=${includedir} BUILD_STATIC=no"
+
+ do_install() {
+ oe_runmake install
+ }
+
+ BBCLASSEXTEND = "native nativesdk"
+
+Building an Autotooled Package
+------------------------------
+
+Applications built with Autotools (i.e. ``autoconf`` and ``automake``)
+require a recipe that has a source archive listed in :term:`SRC_URI` and that
+also inherits the :ref:`ref-classes-autotools` class, which contains the
+definitions of all the steps needed to build an Autotools-based application.
+The result of the build is automatically packaged, and if the application
+uses NLS for localization, packages with locale information are generated
+(one package per language). Here is one example (``hello_2.3.bb``)::
+
+ SUMMARY = "GNU Helloworld application"
+ SECTION = "examples"
+ LICENSE = "GPL-2.0-or-later"
+ LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+ SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.gz"
+
+ inherit autotools gettext
+
+The variable :term:`LIC_FILES_CHKSUM` is used to track source license changes
+as described in the ":ref:`dev-manual/licenses:tracking license changes`"
+section of this manual. You can quickly
+create Autotools-based recipes in a manner similar to the previous example.
+
+.. _ref-building-meson-package:
+
+Building a Meson Package
+------------------------
+
+Applications built with the `Meson build system <https://mesonbuild.com/>`__
+just need a recipe that has sources described in :term:`SRC_URI` and inherits
+the :ref:`ref-classes-meson` class.
+
+The :oe_git:`ipcalc recipe </meta-openembedded/tree/meta-networking/recipes-support/ipcalc>`
+is a simple example of an application without dependencies::
+
+ SUMMARY = "Tool to assist in network address calculations for IPv4 and IPv6."
+ HOMEPAGE = "https://gitlab.com/ipcalc/ipcalc"
+
+ SECTION = "net"
+
+ LICENSE = "GPL-2.0-only"
+ LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+ SRC_URI = "git://gitlab.com/ipcalc/ipcalc.git;protocol=https;branch=master"
+ SRCREV = "4c4261a47f355946ee74013d4f5d0494487cc2d6"
+
+ S = "${WORKDIR}/git"
+
+ inherit meson
+
+Applications with dependencies are likely to inherit the
+:ref:`ref-classes-pkgconfig` class, as ``pkg-config`` is the default method
+used by Meson to find dependencies and compile applications against them.
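+
+As a minimal sketch, such a recipe typically adds a build-time dependency and
+inherits both classes (the ``glib-2.0`` dependency here is only an
+illustration; use whatever your application actually requires)::
+
+   DEPENDS = "glib-2.0"
+
+   inherit meson pkgconfig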
+
+Splitting an Application into Multiple Packages
+-----------------------------------------------
+
+You can use the variables :term:`PACKAGES` and :term:`FILES` to split an
+application into multiple packages.
+
+Here is an example that uses the ``libxpm`` recipe. By default,
+this recipe generates a single package that contains the library along
+with a few binaries. You can modify the recipe to split the binaries
+into separate packages::
+
+ require xorg-lib-common.inc
+
+ SUMMARY = "Xpm: X Pixmap extension library"
+ LICENSE = "MIT"
+ LIC_FILES_CHKSUM = "file://COPYING;md5=51f4270b012ecd4ab1a164f5f4ed6cf7"
+ DEPENDS += "libxext libsm libxt"
+ PE = "1"
+
+ XORG_PN = "libXpm"
+
+ PACKAGES =+ "sxpm cxpm"
+ FILES:cxpm = "${bindir}/cxpm"
+ FILES:sxpm = "${bindir}/sxpm"
+
+In the previous example, we want to ship the ``sxpm`` and ``cxpm``
+binaries in separate packages. Since ``bindir`` would be packaged into
+the main :term:`PN` package by default, we prepend the :term:`PACKAGES` variable
+so additional package names are added to the start of the list. This results
+in the extra ``FILES:*`` variables then containing information that
+defines which files and directories go into which packages. Files
+included by earlier packages are skipped by later packages. Thus, the
+main :term:`PN` package does not include the above listed files.
+
+Packaging Externally Produced Binaries
+--------------------------------------
+
+Sometimes, you need to add pre-compiled binaries to an image. For
+example, suppose that there are binaries for proprietary code,
+created by a particular division of a company. Your part of the company
+needs to use those binaries as part of an image that you are building
+using the OpenEmbedded build system. Since you only have the binaries
+and not the source code, you cannot use a typical recipe that expects to
+fetch the source specified in
+:term:`SRC_URI` and then compile it.
+
+One method is to package the binaries and then install them as part of
+the image. Generally, it is not a good idea to package binaries since,
+among other things, it can hinder the ability to reproduce builds and
+could lead to compatibility problems with ABI in the future. However,
+sometimes you have no choice.
+
+The easiest solution is to create a recipe that uses the
+:ref:`ref-classes-bin-package` class and to be sure that you are using default
+locations for build artifacts. In most cases, the
+:ref:`ref-classes-bin-package` class handles "skipping" the configure and
+compile steps as well as sets things up to grab packages from the appropriate
+area. In particular, this class sets ``noexec`` on both the
+:ref:`ref-tasks-configure` and :ref:`ref-tasks-compile` tasks, sets
+``FILES:${PN}`` to "/" so that it picks up all files, and sets up a
+:ref:`ref-tasks-install` task, which effectively copies all files from ``${S}``
+to ``${D}``. The :ref:`ref-classes-bin-package` class works well when the files
+extracted into ``${S}`` are already laid out in the way they should be laid out
+on the target. For more information on these variables, see the :term:`FILES`,
+:term:`PN`, :term:`S`, and :term:`D` variables in the Yocto Project Reference
+Manual's variable glossary.
+
+.. note::
+
+ - Using :term:`DEPENDS` is a good
+ idea even for components distributed in binary form, and is often
+ necessary for shared libraries. For a shared library, listing the
+ library dependencies in :term:`DEPENDS` makes sure that the libraries
+ are available in the staging sysroot when other recipes link
+ against the library, which might be necessary for successful
+ linking.
+
+ - Using :term:`DEPENDS` also allows runtime dependencies between
+ packages to be added automatically. See the
+ ":ref:`overview-manual/concepts:automatically added runtime dependencies`"
+ section in the Yocto Project Overview and Concepts Manual for more
+ information.
+
+If you cannot use the :ref:`ref-classes-bin-package` class, you need to be sure you are
+doing the following:
+
+- Create a recipe where the
+ :ref:`ref-tasks-configure` and
+ :ref:`ref-tasks-compile` tasks do
+ nothing: It is usually sufficient to just not define these tasks in
+ the recipe, because the default implementations do nothing unless a
+ Makefile is found in
+ ``${``\ :term:`S`\ ``}``.
+
+ If ``${S}`` might contain a Makefile, or if you inherit some class
+ that replaces :ref:`ref-tasks-configure` and :ref:`ref-tasks-compile` with custom
+ versions, then you can use the
+ ``[``\ :ref:`noexec <bitbake-user-manual/bitbake-user-manual-metadata:variable flags>`\ ``]``
+ flag to turn the tasks into no-ops, as follows::
+
+ do_configure[noexec] = "1"
+ do_compile[noexec] = "1"
+
+ Unlike :ref:`bitbake-user-manual/bitbake-user-manual-metadata:deleting a task`,
+ using the flag preserves the dependency chain from the :ref:`ref-tasks-fetch`,
+ :ref:`ref-tasks-unpack`, and :ref:`ref-tasks-patch` tasks to the
+ :ref:`ref-tasks-install` task.
+
+- Make sure your :ref:`ref-tasks-install` task installs the binaries
+ appropriately.
+
+- Ensure that you set up :term:`FILES`
+ (usually
+ ``FILES:${``\ :term:`PN`\ ``}``) to
+ point to the files you have installed, which of course depends on
+ where you have installed them and whether those files are in
+ different locations than the defaults.
+
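+Here is a hedged sketch that pulls these points together. It assumes a
+hypothetical pre-built executable named ``example-tool`` shipped alongside the
+recipe and installed into the standard binary directory::
+
+   SUMMARY = "Example pre-built binary"
+   LICENSE = "CLOSED"
+
+   SRC_URI = "file://example-tool"
+
+   # Nothing to configure or compile for a pre-built binary
+   do_configure[noexec] = "1"
+   do_compile[noexec] = "1"
+
+   do_install() {
+       # A single file fetched through file:// ends up in ${WORKDIR}
+       install -d ${D}${bindir}
+       install -m 0755 ${WORKDIR}/example-tool ${D}${bindir}/example-tool
+   }
+
+   FILES:${PN} = "${bindir}/example-tool"
+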
+Following Recipe Style Guidelines
+=================================
+
+When writing recipes, it is good to conform to existing style guidelines.
+See the ":doc:`../contributor-guide/recipe-style-guide`" in the Yocto Project
+and OpenEmbedded Contributor Guide for reference.
+
+It is common for existing recipes to deviate a bit from this style.
+However, aiming for at least a consistent style is a good idea. Some
+practices, such as omitting spaces around ``=`` operators in assignments
+or ordering recipe components in an erratic way, are widely seen as poor
+style.
+
+Recipe Syntax
+=============
+
+Understanding recipe file syntax is important for writing recipes. The
+following list overviews the basic items that make up a BitBake recipe
+file. For more complete BitBake syntax descriptions, see the
+":doc:`bitbake:bitbake-user-manual/bitbake-user-manual-metadata`"
+chapter of the BitBake User Manual.
+
+- *Variable Assignments and Manipulations:* Variable assignments allow
+ a value to be assigned to a variable. The assignment can be static
+ text or might include the contents of other variables. In addition to
+ the assignment, appending and prepending operations are also
+ supported.
+
+ The following example shows some of the ways you can use variables in
+ recipes::
+
+ S = "${WORKDIR}/postfix-${PV}"
+ CFLAGS += "-DNO_ASM"
+ CFLAGS:append = " --enable-important-feature"
+
+- *Functions:* Functions provide a series of actions to be performed.
+ You usually use functions to override the default implementation of a
+ task function or to complement a default function (i.e. append or
+ prepend to an existing function). Standard functions use ``sh`` shell
+ syntax, although access to OpenEmbedded variables and internal
+ methods is also available.
+
+ Here is an example function from the ``sed`` recipe::
+
+ do_install () {
+ autotools_do_install
+ install -d ${D}${base_bindir}
+ mv ${D}${bindir}/sed ${D}${base_bindir}/sed
+ rmdir ${D}${bindir}/
+ }
+
+ It is
+ also possible to implement new functions that are called between
+ existing tasks as long as the new functions are not replacing or
+ complementing the default functions. You can implement functions in
+ Python instead of shell. Neither of these options is seen in the
+ majority of recipes.
+
+- *Keywords:* BitBake recipes use only a few keywords. You use keywords
+ to include common functions (``inherit``), load parts of a recipe
+ from other files (``include`` and ``require``) and export variables
+ to the environment (``export``).
+
+ The following example shows the use of some of these keywords::
+
+ export POSTCONF = "${STAGING_BINDIR}/postconf"
+ inherit autoconf
+ require otherfile.inc
+
+- *Comments (#):* Any lines that begin with the hash character (``#``)
+ are treated as comment lines and are ignored::
+
+ # This is a comment
+
+This next list summarizes the most important and most commonly used
+parts of the recipe syntax. For more information on these parts of the
+syntax, you can reference the
+":doc:`bitbake:bitbake-user-manual/bitbake-user-manual-metadata`" chapter
+in the BitBake User Manual.
+
+- *Line Continuation (\\):* Use the backward slash (``\``) character to
+ split a statement over multiple lines. Place the slash character at
+ the end of the line that is to be continued on the next line::
+
+ VAR = "A really long \
+ line"
+
+ .. note::
+
+ You cannot have any characters, including spaces or tabs, after the
+ slash character.
+
+- *Using Variables (${VARNAME}):* Use the ``${VARNAME}`` syntax to
+ access the contents of a variable::
+
+ SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz"
+
+ .. note::
+
+ It is important to understand that the value of a variable
+ expressed in this form does not get substituted automatically. The
+ expansion of these expressions happens on-demand later (e.g.
+ usually when a function that makes reference to the variable
+ executes). This behavior ensures that the values are most
+ appropriate for the context in which they are finally used. On the
+ rare occasion that you do need the variable expression to be
+ expanded immediately, you can use the ``:=`` operator instead of
+ ``=`` when you make the assignment, but this is not generally needed.
+
+- *Quote All Assignments ("value"):* Use double quotes around values in
+ all variable assignments (e.g. ``"value"``). Here is an example::
+
+ VAR1 = "${OTHERVAR}"
+ VAR2 = "The version is ${PV}"
+
+- *Conditional Assignment (?=):* Conditional assignment is used to
+ assign a value to a variable, but only when the variable is currently
+ unset. Use the question mark followed by the equal sign (``?=``) to
+ make a "soft" assignment used for conditional assignment. Typically,
+ "soft" assignments are used in the ``local.conf`` file for variables
+ that are allowed to come through from the external environment.
+
+ Here is an example where ``VAR1`` is set to "New value" if it is
+ currently empty. However, if ``VAR1`` has already been set, it
+ remains unchanged::
+
+ VAR1 ?= "New value"
+
+ In this next example, ``VAR1`` is left with the value "Original value"::
+
+ VAR1 = "Original value"
+ VAR1 ?= "New value"
+
+- *Appending (+=):* Use the plus character followed by the equals sign
+ (``+=``) to append values to existing variables.
+
+ .. note::
+
+ This operator adds a space between the existing content of the
+ variable and the new content.
+
+ Here is an example::
+
+ SRC_URI += "file://fix-makefile.patch"
+
+- *Prepending (=+):* Use the equals sign followed by the plus character
+ (``=+``) to prepend values to existing variables.
+
+ .. note::
+
+ This operator adds a space between the new content and the
+ existing content of the variable.
+
+ Here is an example::
+
+ VAR =+ "Starts"
+
+- *Appending (:append):* Use the ``:append`` operator to append values
+ to existing variables. This operator does not add any additional
+ space. Also, the operator is applied after all the ``+=``, and ``=+``
+ operators have been applied and after all ``=`` assignments have
+ occurred. This means that if ``:append`` is used in a recipe, it can
+ only be overridden by another layer using the special ``:remove``
+ operator, which in turn will prevent further layers from adding it back.
+
+ The following example shows the space being explicitly added to the
+ start to ensure the appended value is not merged with the existing
+ value::
+
+ CFLAGS:append = " --enable-important-feature"
+
+ You can also use
+ the ``:append`` operator with overrides, which results in the actions
+ only being performed for the specified target or machine::
+
+ CFLAGS:append:sh4 = " --enable-important-sh4-specific-feature"
+
+- *Prepending (:prepend):* Use the ``:prepend`` operator to prepend
+ values to existing variables. This operator does not add any
+ additional space. Also, the operator is applied after all the ``+=``,
+ and ``=+`` operators have been applied and after all ``=``
+ assignments have occurred.
+
+ The following example shows the space being explicitly added to the
+ end to ensure the prepended value is not merged with the existing
+ value::
+
+ CFLAGS:prepend = "-I${S}/myincludes "
+
+ You can also use the
+ ``:prepend`` operator with overrides, which results in the actions
+ only being performed for the specified target or machine::
+
+ CFLAGS:prepend:sh4 = "-I${S}/myincludes "
+
+- *Overrides:* You can use overrides to set a value conditionally,
+ typically based on how the recipe is being built. For example, to set
+ the :term:`KBRANCH` variable's
+ value to "standard/base" for any target
+ :term:`MACHINE`, except for
+ qemuarm where it should be set to "standard/arm-versatile-926ejs",
+ you would do the following::
+
+ KBRANCH = "standard/base"
+ KBRANCH:qemuarm = "standard/arm-versatile-926ejs"
+
+ Overrides are also used to separate
+ alternate values of a variable in other situations. For example, when
+ setting variables such as
+ :term:`FILES` and
+ :term:`RDEPENDS` that are
+ specific to individual packages produced by a recipe, you should
+ always use an override that specifies the name of the package.
+
+- *Indentation:* Use spaces for indentation rather than tabs. For
+ shell functions, both currently work. However, it is a policy
+ decision of the Yocto Project to use tabs in shell functions. Realize
+ that some layers have a policy to use spaces for all indentation.
+
+- *Using Python for Complex Operations:* For more advanced processing,
+ it is possible to use Python code during variable assignments (e.g.
+ search and replacement on a variable).
+
+ You indicate Python code using the ``${@python_code}`` syntax for the
+ variable assignment::
+
+ SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz
+
+- *Shell Function Syntax:* Write shell functions as if you were writing
+ a shell script when you describe a list of actions to take. You
+ should ensure that your script works with a generic ``sh`` and that
+ it does not require any ``bash`` or other shell-specific
+ functionality. The same considerations apply to various system
+ utilities (e.g. ``sed``, ``grep``, ``awk``, and so forth) that you
+ might wish to use. If in doubt, you should check with multiple
+ implementations --- including those from BusyBox.
+
diff --git a/documentation/dev-manual/packages.rst b/documentation/dev-manual/packages.rst
new file mode 100644
index 0000000000..e5028fffdc
--- /dev/null
+++ b/documentation/dev-manual/packages.rst
@@ -0,0 +1,1250 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Working with Packages
+*********************
+
+This section describes a few tasks that involve packages:
+
+- :ref:`dev-manual/packages:excluding packages from an image`
+
+- :ref:`dev-manual/packages:incrementing a package version`
+
+- :ref:`dev-manual/packages:handling optional module packaging`
+
+- :ref:`dev-manual/packages:using runtime package management`
+
+- :ref:`dev-manual/packages:generating and using signed packages`
+
+- :ref:`Setting up and running package test
+ (ptest) <dev-manual/packages:testing packages with ptest>`
+
+- :ref:`dev-manual/packages:creating node package manager (npm) packages`
+
+- :ref:`dev-manual/packages:adding custom metadata to packages`
+
+Excluding Packages from an Image
+================================
+
+You might find it necessary to prevent specific packages from being
+installed into an image. If so, you can use several variables to direct
+the build system to essentially ignore installing recommended packages
+or to not install a package at all.
+
+The following list introduces variables you can use to prevent packages
+from being installed into your image. Each of these variables only works
+with IPK and RPM package types, not for Debian packages.
+Also, you can use these variables from your ``local.conf`` file
+or attach them to a specific image recipe by using a recipe name
+override. For more detail on the variables, see the descriptions in the
+Yocto Project Reference Manual's glossary chapter.
+
+- :term:`BAD_RECOMMENDATIONS`:
+ Use this variable to specify "recommended-only" packages that you do
+ not want installed.
+
+- :term:`NO_RECOMMENDATIONS`:
+ Use this variable to prevent all "recommended-only" packages from
+ being installed.
+
+- :term:`PACKAGE_EXCLUDE`:
+ Use this variable to prevent specific packages from being installed
+ regardless of whether they are "recommended-only" or not. You need to
+ realize that the build process could fail with an error when you
+ prevent the installation of a package whose presence is required by
+ an installed package.
+
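+For example, entries such as the following in ``local.conf`` illustrate these
+variables. The package names are placeholders only, and the last line shows how
+a recipe name override ties a setting to one specific image recipe::
+
+   BAD_RECOMMENDATIONS = "udev-hwdb"
+   NO_RECOMMENDATIONS = "1"
+
+   # Exclude a package only when building one particular image
+   PACKAGE_EXCLUDE:pn-core-image-minimal = "perl"
+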
+Incrementing a Package Version
+==============================
+
+This section provides some background on how binary package versioning
+is accomplished and presents some of the services, variables, and
+terminology involved.
+
+In order to understand binary package versioning, you need to consider
+the following:
+
+- Binary Package: The binary package that is eventually built and
+ installed into an image.
+
+- Binary Package Version: The binary package version is composed of two
+ components --- a version and a revision.
+
+ .. note::
+
+ Technically, a third component, the "epoch" (i.e. :term:`PE`) is involved
+ but this discussion for the most part ignores :term:`PE`.
+
+ The version and revision are taken from the
+ :term:`PV` and
+ :term:`PR` variables, respectively.
+
+- :term:`PV`: The recipe version. :term:`PV` represents the version of the
+ software being packaged. Do not confuse :term:`PV` with the binary
+ package version.
+
+- :term:`PR`: The recipe revision.
+
+- :term:`SRCPV`: The OpenEmbedded
+ build system uses this string to help define the value of :term:`PV` when
+ the source code revision needs to be included in it.
+
+- :yocto_wiki:`PR Service </PR_Service>`: A
+ network-based service that helps automate keeping package feeds
+ compatible with existing package manager applications such as RPM,
+ APT, and OPKG.
+
+Whenever the binary package content changes, the binary package version
+must change. Changing the binary package version is accomplished by
+changing or "bumping" the :term:`PR` and/or :term:`PV` values. Increasing these
+values occurs one of two ways:
+
+- Automatically using a Package Revision Service (PR Service).
+
+- Manually incrementing the :term:`PR` and/or :term:`PV` variables.
+
+Because a primary challenge of any build system and its users is to
+maintain a package feed that is compatible with existing package manager
+applications such as RPM, APT, and OPKG, an automated system is much
+preferred over a manual one. In either case, the main requirement is
+that binary package version numbering increases in a linear fashion and
+that there are enough version components to support that linear
+progression. For information on how to ensure
+package revisioning remains linear, see the
+":ref:`dev-manual/packages:automatically incrementing a package version number`"
+section.
+
+The following three sections provide related information on the PR
+Service, the manual method for "bumping" :term:`PR` and/or :term:`PV`, and on
+how to ensure binary package revisioning remains linear.
+
+Working With a PR Service
+-------------------------
+
+As mentioned, attempting to maintain revision numbers in the
+:term:`Metadata` is error prone, inaccurate,
+and causes problems for people submitting recipes. Conversely, the PR
+Service automatically generates increasing numbers, particularly the
+revision field, which removes the human element.
+
+.. note::
+
+ For additional information on using a PR Service, you can see the
+ :yocto_wiki:`PR Service </PR_Service>` wiki page.
+
+The Yocto Project uses variables in order of decreasing priority to
+facilitate revision numbering (i.e.
+:term:`PE`,
+:term:`PV`, and
+:term:`PR` for epoch, version, and
+revision, respectively). The values are highly dependent on the policies
+and procedures of a given distribution and package feed.
+
+Because the OpenEmbedded build system uses
+":ref:`signatures <overview-manual/concepts:checksums (signatures)>`", which are
+unique to a given build, the build system knows when to rebuild
+packages. All the inputs into a given task are represented by a
+signature, which can trigger a rebuild when different. Thus, the build
+system itself does not rely on the :term:`PR`, :term:`PV`, and :term:`PE` numbers to
+trigger a rebuild. The signatures, however, can be used to generate
+these values.
+
+The PR Service works with both ``OEBasic`` and ``OEBasicHash``
+generators. The value of :term:`PR` bumps when the checksum changes and the
+different generator mechanisms change signatures under different
+circumstances.
+
+As implemented, the build system includes values from the PR Service
+into the :term:`PR` field as an addition using the form "``.x``" so ``r0``
+becomes ``r0.1``, ``r0.2`` and so forth. This scheme allows existing
+:term:`PR` values to be used for whatever reasons, which include manual
+:term:`PR` bumps, should it be necessary.
+
+By default, the PR Service is not enabled or running. Thus, the packages
+generated are just "self consistent". The build system adds and removes
+packages and there are no guarantees about upgrade paths but images will
+be consistent and correct with the latest changes.
+
+The simplest form for a PR Service is for a single host development system
+that builds the package feed (building system). For this scenario, you can
+enable a local PR Service by setting :term:`PRSERV_HOST` in your
+``local.conf`` file in the :term:`Build Directory`::
+
+ PRSERV_HOST = "localhost:0"
+
+Once the service is started, packages will automatically
+get increasing :term:`PR` values and BitBake takes care of starting and
+stopping the server.
+
+If you have a more complex setup where multiple host development systems
+work against a common, shared package feed, you have a single PR Service
+running and it is connected to each building system. For this scenario,
+you need to start the PR Service using the ``bitbake-prserv`` command::
+
+ bitbake-prserv --host ip --port port --start
+
+In addition to
+hand-starting the service, you need to update the ``local.conf`` file of
+each building system as described earlier so each system points to the
+server and port.
+
+It is also recommended you use build history, which adds some sanity
+checks to binary package versions, in conjunction with the server that
+is running the PR Service. To enable build history, add the following to
+each building system's ``local.conf`` file::
+
+ # It is recommended to activate "buildhistory" for testing the PR service
+ INHERIT += "buildhistory"
+ BUILDHISTORY_COMMIT = "1"
+
+For information on build
+history, see the
+":ref:`dev-manual/build-quality:maintaining build output quality`" section.
+
+.. note::
+
+ The OpenEmbedded build system does not maintain :term:`PR` information as
+ part of the shared state (sstate) packages. If you maintain an sstate
+ feed, it's expected that either all your building systems that
+ contribute to the sstate feed use a shared PR service, or you do not
+ run a PR service on any of your building systems.
+
+ That's because if you had multiple machines sharing a PR service but
+ not their sstate feed, you could end up with "diverging" hashes for
+ the same output artefacts. When presented to the shared PR service,
+ each would be considered as new and would increase the revision
+ number, causing many unnecessary package upgrades.
+
+ For more information on shared state, see the
+ ":ref:`overview-manual/concepts:shared state cache`"
+ section in the Yocto Project Overview and Concepts Manual.
+
+Manually Bumping PR
+-------------------
+
+The alternative to setting up a PR Service is to manually "bump" the
+:term:`PR` variable.
+
+If a committed change results in changing the package output, then the
+value of the :term:`PR` variable needs to be increased (or "bumped") as part of
+that commit. For new recipes you should add the :term:`PR` variable and set
+its initial value equal to "r0", which is the default. Even though the
+default value is "r0", the practice of adding it to a new recipe makes
+it harder to forget to bump the variable when you make changes to the
+recipe in the future.
+
+Usually, version increases occur only to binary packages. However, if
+for some reason :term:`PV` changes but does not increase, you can increase
+the :term:`PE` variable (Package Epoch). The :term:`PE` variable defaults to
+"0".
+
+Binary package version numbering strives to follow the `Debian Version
+Field Policy
+Guidelines <https://www.debian.org/doc/debian-policy/ch-controlfields.html>`__.
+These guidelines define how versions are compared and what "increasing"
+a version means.
+
+Automatically Incrementing a Package Version Number
+---------------------------------------------------
+
+When fetching a repository, BitBake uses the
+:term:`SRCREV` variable to determine
+the specific source code revision from which to build. You set the
+:term:`SRCREV` variable to
+:term:`AUTOREV` to cause the
+OpenEmbedded build system to automatically use the latest revision of
+the software::
+
+ SRCREV = "${AUTOREV}"
+
+Furthermore, you need to reference :term:`SRCPV` in :term:`PV` in order to
+automatically update the version whenever the revision of the source
+code changes. Here is an example::
+
+ PV = "1.0+git${SRCPV}"
+
+The OpenEmbedded build system substitutes :term:`SRCPV` with the following:
+
+.. code-block:: none
+
+ AUTOINC+source_code_revision
+
+The build system replaces the ``AUTOINC``
+with a number. The number used depends on the state of the PR Service:
+
+- If PR Service is enabled, the build system increments the number,
+ which is similar to the behavior of
+ :term:`PR`. This behavior results in
+ linearly increasing package versions, which is desirable. Here is an
+ example:
+
+ .. code-block:: none
+
+ hello-world-git_0.0+git0+b6558dd387-r0.0_armv7a-neon.ipk
+ hello-world-git_0.0+git1+dd2f5c3565-r0.0_armv7a-neon.ipk
+
+- If PR Service is not enabled, the build system replaces the
+ ``AUTOINC`` placeholder with zero (i.e. "0"). This results in
+ changing the package version since the source revision is included.
+ However, package versions are not increased linearly. Here is an
+ example:
+
+ .. code-block:: none
+
+ hello-world-git_0.0+git0+b6558dd387-r0.0_armv7a-neon.ipk
+ hello-world-git_0.0+git0+dd2f5c3565-r0.0_armv7a-neon.ipk
+
+In summary, the OpenEmbedded build system does not track the history of
+binary package versions for this purpose. ``AUTOINC``, in this case, is
+comparable to :term:`PR`. If PR server is not enabled, ``AUTOINC`` in the
+package version is simply replaced by "0". If PR server is enabled, the
+build system keeps track of the package versions and bumps the number
+when the package revision changes.
+
+Handling Optional Module Packaging
+==================================
+
+Many pieces of software split functionality into optional modules (or
+plugins) and the plugins that are built might depend on configuration
+options. To avoid having to duplicate the logic that determines what
+modules are available in your recipe or to avoid having to package each
+module by hand, the OpenEmbedded build system provides functionality to
+handle module packaging dynamically.
+
+To handle optional module packaging, you need to do two things:
+
+- Ensure the module packaging is actually done.
+
+- Ensure that any dependencies on optional modules from other recipes
+ are satisfied by your recipe.
+
+Making Sure the Packaging is Done
+---------------------------------
+
+To ensure the module packaging actually gets done, you use the
+``do_split_packages`` function within the ``populate_packages`` Python
+function in your recipe. The ``do_split_packages`` function searches for
+a pattern of files or directories under a specified path and creates a
+package for each one it finds by appending to the
+:term:`PACKAGES` variable and
+setting the appropriate values for ``FILES:packagename``,
+``RDEPENDS:packagename``, ``DESCRIPTION:packagename``, and so forth.
+Here is an example from the ``lighttpd`` recipe::
+
+ python populate_packages:prepend () {
+ lighttpd_libdir = d.expand('${libdir}')
+ do_split_packages(d, lighttpd_libdir, '^mod_(.*).so$',
+ 'lighttpd-module-%s', 'Lighttpd module for %s',
+ extra_depends='')
+ }
+
+The previous example specifies a number of things in the call to
+``do_split_packages``.
+
+- A directory within the files installed by your recipe through
+ :ref:`ref-tasks-install` in which to search.
+
+- A regular expression used to match module files in that directory. In
+ the example, note the parentheses () that mark the part of the
+ expression from which the module name should be derived.
+
+- A pattern to use for the package names.
+
+- A description for each package.
+
+- An empty string for ``extra_depends``, which disables the default
+ dependency on the main ``lighttpd`` package. Thus, if a file in
+ ``${libdir}`` called ``mod_alias.so`` is found, a package called
+ ``lighttpd-module-alias`` is created for it and the
+ :term:`DESCRIPTION` is set to
+ "Lighttpd module for alias".
+
+Often, packaging modules is as simple as the previous example. However,
+there are more advanced options that you can use within
+``do_split_packages`` to modify its behavior. And, if you need to, you
+can add more logic by specifying a hook function that is called for each
+package. It is also perfectly acceptable to call ``do_split_packages``
+multiple times if you have more than one set of modules to package.
+
+For more examples that show how to use ``do_split_packages``, see the
+``connman.inc`` file in the ``meta/recipes-connectivity/connman/``
+directory of the ``poky`` :ref:`source repository <overview-manual/development-environment:yocto project source repositories>`. You can
+also find examples in ``meta/classes-recipe/kernel.bbclass``.
+
+Here is a reference that shows ``do_split_packages`` mandatory and
+optional arguments::
+
+ Mandatory arguments
+
+ root
+ The path in which to search
+ file_regex
+ Regular expression to match searched files.
+ Use parentheses () to mark the part of this
+ expression that should be used to derive the
+ module name (to be substituted where %s is
+ used in other function arguments as noted below)
+ output_pattern
+ Pattern to use for the package names. Must
+ include %s.
+ description
+ Description to set for each package. Must
+ include %s.
+
+ Optional arguments
+
+ postinst
+ Postinstall script to use for all packages
+ (as a string)
+ recursive
+ True to perform a recursive search --- default
+ False
+ hook
+ A hook function to be called for every match.
+ The function will be called with the following
+ arguments (in the order listed):
+
+ f
+ Full path to the file/directory match
+ pkg
+ The package name
+ file_regex
+ As above
+ output_pattern
+ As above
+ modulename
+ The module name derived using file_regex
+ extra_depends
+ Extra runtime dependencies (RDEPENDS) to be
+ set for all packages. The default value of None
+ causes a dependency on the main package
+ (${PN}) --- if you do not want this, pass empty
+ string '' for this parameter.
+ aux_files_pattern
+ Extra item(s) to be added to FILES for each
+ package. Can be a single string item or a list
+ of strings for multiple items. Must include %s.
+ postrm
+ postrm script to use for all packages (as a
+ string)
+ allow_dirs
+ True to allow directories to be matched -
+ default False
+ prepend
+ If True, prepend created packages to PACKAGES
+ instead of the default False which appends them
+ match_path
+ match file_regex on the whole relative path to
+ the root rather than just the filename
+ aux_files_pattern_verbatim
+ Extra item(s) to be added to FILES for each
+ package, using the actual derived module name
+ rather than converting it to something legal
+ for a package name. Can be a single string item
+ or a list of strings for multiple items. Must
+ include %s.
+ allow_links
+ True to allow symlinks to be matched --- default
+ False
+ summary
+ Summary to set for each package. Must include %s;
+ defaults to description if not set.
+
+
+
+Satisfying Dependencies
+-----------------------
+
+The second part for handling optional module packaging is to ensure that
+any dependencies on optional modules from other recipes are satisfied by
+your recipe. You can be sure these dependencies are satisfied by using
+the :term:`PACKAGES_DYNAMIC`
+variable. Here is an example that continues with the ``lighttpd`` recipe
+shown earlier::
+
+ PACKAGES_DYNAMIC = "lighttpd-module-.*"
+
+The name
+specified in the regular expression can of course be anything. In this
+example, it is ``lighttpd-module-`` and is specified as the prefix to
+ensure that any :term:`RDEPENDS` and
+:term:`RRECOMMENDS` on a package
+name starting with the prefix are satisfied during build time. If you
+are using ``do_split_packages`` as described in the previous section,
+the value you put in :term:`PACKAGES_DYNAMIC` should correspond to the name
+pattern specified in the call to ``do_split_packages``.
+
+Using Runtime Package Management
+================================
+
+During a build, BitBake always transforms a recipe into one or more
+packages. For example, BitBake takes the ``bash`` recipe and produces a
+number of packages (e.g. ``bash``, ``bash-bashbug``,
+``bash-completion``, ``bash-completion-dbg``, ``bash-completion-dev``,
+``bash-completion-extra``, ``bash-dbg``, and so forth). Not all
+generated packages are included in an image.
+
+In several situations, you might need to update, add, remove, or query
+the packages on a target device at runtime (i.e. without having to
+generate a new image). Examples of such situations include:
+
+- You want to provide in-the-field updates to deployed devices (e.g.
+ security updates).
+
+- You want to have a fast turn-around development cycle for one or more
+ applications that run on your device.
+
+- You want to temporarily install the "debug" packages of various
+ applications on your device so that debugging can be greatly improved
+ by allowing access to symbols and source debugging.
+
+- You want to deploy a more minimal package selection of your device
+ but allow in-the-field updates to add a larger selection for
+ customization.
+
+In all these situations, you have something similar to a more
+traditional Linux distribution in that in-field devices are able to
+receive pre-compiled packages from a server for installation or update.
+Being able to install these packages on a running, in-field device is
+what is termed "runtime package management".
+
+In order to use runtime package management, you need a host or server
+machine that serves up the pre-compiled packages plus the required
+metadata. You also need package manipulation tools on the target. The
+build machine is a likely candidate to act as the server. However, that
+machine does not necessarily have to be the package server. The build
+machine could push its artifacts to another machine that acts as the
+server (e.g. Internet-facing). In fact, doing so is advantageous for a
+production environment as getting the packages away from the development
+system's :term:`Build Directory` prevents accidental overwrites.
+
+A simple build that targets just one device produces more than one
+package database. In other words, the packages produced by a build are
+separated out into a couple of different package groupings based on
+criteria such as the target's CPU architecture, the target board, or the
+C library used on the target. For example, a build targeting the
+``qemux86`` device produces the following three package databases:
+``noarch``, ``i586``, and ``qemux86``. If you wanted your ``qemux86``
+device to be aware of all the packages that were available to it, you
+would need to point it to each of these databases individually. In a
+similar way, a traditional Linux distribution usually is configured to
+be aware of a number of software repositories from which it retrieves
+packages.
+
+Using runtime package management is completely optional and not required
+for a successful build or deployment in any way. But if you want to make
+use of runtime package management, you need to do a couple things above
+and beyond the basics. The remainder of this section describes what you
+need to do.
+
+Build Considerations
+--------------------
+
+This section describes build considerations of which you need to be
+aware in order to provide support for runtime package management.
+
+When BitBake generates packages, it needs to know what format or formats
+to use. In your configuration, you use the
+:term:`PACKAGE_CLASSES`
+variable to specify the format:
+
+#. Open the ``local.conf`` file inside your :term:`Build Directory` (e.g.
+ ``poky/build/conf/local.conf``).
+
+#. Select the desired package format as follows::
+
+ PACKAGE_CLASSES ?= "package_packageformat"
+
+ where packageformat can be "ipk", "rpm",
+ "deb", or "tar" which are the supported package formats.
+
+ .. note::
+
+ Because the Yocto Project supports four different package formats,
+ you can set the variable with more than one argument. However, the
+ OpenEmbedded build system only uses the first argument when
+ creating an image or Software Development Kit (SDK).
+
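+For example, the following setting produces both RPM and IPK packages, with
+RPM (the first entry) being the format used when creating images and SDKs::
+
+   PACKAGE_CLASSES ?= "package_rpm package_ipk"
+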
+If you would like your image to start off with a basic package database
+containing the packages in your current build as well as to have the
+relevant tools available on the target for runtime package management,
+you can include "package-management" in the
+:term:`IMAGE_FEATURES`
+variable. Including "package-management" in this configuration variable
+ensures that when the image is assembled for your target, the image
+includes the currently-known package databases as well as the
+target-specific tools required for runtime package management to be
+performed on the target. However, this is not strictly necessary. You
+could start your image off without any databases but only include the
+required on-target package tool(s). As an example, you could include
+"opkg" in your
+:term:`IMAGE_INSTALL` variable
+if you are using the IPK package format. You can then initialize your
+target's package database(s) later once your image is up and running.
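+
+For example, either of the following ``local.conf`` lines sketches these two
+approaches (from ``local.conf``, image features are usually added through
+:term:`EXTRA_IMAGE_FEATURES`; the second line assumes the IPK format)::
+
+   EXTRA_IMAGE_FEATURES += "package-management"
+
+   # Alternatively, install only the on-target package tool itself
+   IMAGE_INSTALL:append = " opkg"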
+
+Whenever you perform any sort of build step that can potentially
+generate a package or modify existing package, it is always a good idea
+to re-generate the package index after the build by using the following
+command::
+
+ $ bitbake package-index
+
+It might be tempting to build the
+package and the package index at the same time with a command such as
+the following::
+
+ $ bitbake some-package package-index
+
+Do not do this, as BitBake does not schedule the package index task to
+run after completion of the package you are building. Consequently, you
+cannot be sure the package index includes information for the package
+you just built. Thus, be sure to run the package index step separately
+after building any packages.
+
+You can use the
+:term:`PACKAGE_FEED_ARCHS`,
+:term:`PACKAGE_FEED_BASE_PATHS`,
+and
+:term:`PACKAGE_FEED_URIS`
+variables to pre-configure target images to use a package feed. If you
+do not define these variables, then manual steps as described in the
+subsequent sections are necessary to configure the target. You should
+set these variables before building the image in order to produce a
+correctly configured image.
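+
+As a sketch, pre-configuring an RPM feed served from a hypothetical
+``my.server`` host could look like the following; adjust the URI, base path,
+and architectures to match your own feed layout::
+
+   PACKAGE_FEED_URIS = "http://my.server/repo"
+   PACKAGE_FEED_BASE_PATHS = "rpm"
+   PACKAGE_FEED_ARCHS = "all core2-64"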
+
+.. note::
+
+ Your image will need enough free storage space to run package upgrades,
+ especially if many of them need to be downloaded at the same time.
+ You should make sure images are created with enough free space
+ by setting the :term:`IMAGE_ROOTFS_EXTRA_SPACE` variable.
+
+When your build is complete, your packages reside in the
+``${TMPDIR}/deploy/packageformat`` directory. For example, if
+``${``\ :term:`TMPDIR`\ ``}`` is
+``tmp`` and your selected package type is RPM, then your RPM packages
+are available in ``tmp/deploy/rpm``.
+
+Host or Server Machine Setup
+----------------------------
+
+Although other protocols are possible, a server using HTTP typically
+serves packages. If you want to use HTTP, then set up and configure a
+web server such as Apache 2, lighttpd, or Python web server on the
+machine serving the packages.
+
+To keep things simple, this section describes how to set up a
+Python web server to share package feeds from the developer's
+machine. Although this server might not be the best for a production
+environment, the setup is simple and straightforward. Should you want
+to use a different server more suited for production (e.g. Apache 2,
+Lighttpd, or Nginx), take the appropriate steps to do so.
+
+From within the :term:`Build Directory` where you have built an image based on
+your packaging choice (i.e. the :term:`PACKAGE_CLASSES` setting), simply start
+the server. The following example assumes a :term:`Build Directory` of ``poky/build``
+and a :term:`PACKAGE_CLASSES` setting of ":ref:`ref-classes-package_rpm`"::
+
+ $ cd poky/build/tmp/deploy/rpm
+ $ python3 -m http.server
+
+Target Setup
+------------
+
+Setting up the target differs depending on the package management
+system. This section provides information for RPM, IPK, and DEB.
+
+Using RPM
+~~~~~~~~~
+
+The :wikipedia:`Dandified Packaging <DNF_(software)>` (DNF) application
+performs runtime package management of RPM packages. In order to use DNF for
+runtime package management, you must perform an initial setup on the
+target machine for cases where the ``PACKAGE_FEED_*`` variables were not
+set as part of the image that is running on the target. This means if
+you built your image and did not use these variables as part of the
+build and your image is now running on the target, you need to perform
+the steps in this section if you want to use runtime package management.
+
+.. note::
+
+ For information on the ``PACKAGE_FEED_*`` variables, see
+ :term:`PACKAGE_FEED_ARCHS`, :term:`PACKAGE_FEED_BASE_PATHS`, and
+ :term:`PACKAGE_FEED_URIS` in the Yocto Project Reference Manual variables
+ glossary.
+
+On the target, you must inform DNF that package databases are available.
+You do this by creating a file named ``/etc/yum.repos.d/oe-packages.repo``
+and defining the ``oe-packages`` repository in it.
+
+As an example, assume the target is able to use the following package
+databases: ``all``, ``i586``, and ``qemux86`` from a server named
+``my.server``. The specifics for setting up the web server are up to
+you. The critical requirement is that the URIs in the target repository
+configuration point to the correct remote location for the feeds.
+
+.. note::
+
+ For development purposes, you can point the web server to the build
+ system's ``deploy`` directory. However, for production use, it is better to
+ copy the package directories to a location outside of the build area and use
+ that location. Doing so avoids situations where the build system
+ overwrites or changes the ``deploy`` directory.
+
+When telling DNF where to look for the package databases, you must
+declare individual locations per architecture or a single location used
+for all architectures. You cannot do both:
+
+- *Create an Explicit List of Architectures:* Define individual base
+ URLs to identify where each package database is located:
+
+ .. code-block:: none
+
+ [oe-packages]
+ baseurl=http://my.server/rpm/i586 http://my.server/rpm/qemux86 http://my.server/rpm/all
+
+ This example
+ informs DNF about individual package databases for all three
+ architectures.
+
+- *Create a Single (Full) Package Index:* Define a single base URL that
+ identifies where a full package database is located::
+
+ [oe-packages]
+ baseurl=http://my.server/rpm
+
+ This example informs DNF about a single
+ package database that contains all the package index information for
+ all supported architectures.
+
+Once you have informed DNF where to find the package databases, you need
+to fetch them:
+
+.. code-block:: none
+
+ # dnf makecache
+
+DNF is now able to find, install, and
+upgrade packages from the specified repository or repositories.
+
+.. note::
+
+ See the `DNF documentation <https://dnf.readthedocs.io/en/latest/>`__ for
+ additional information.
+
+Using IPK
+~~~~~~~~~
+
+The ``opkg`` application performs runtime package management of IPK
+packages. You must perform an initial setup for ``opkg`` on the target
+machine if the
+:term:`PACKAGE_FEED_ARCHS`,
+:term:`PACKAGE_FEED_BASE_PATHS`,
+and
+:term:`PACKAGE_FEED_URIS`
+variables have not been set or the target image was built before the
+variables were set.
+
+The ``opkg`` application uses configuration files to find available
+package databases. Thus, you need to create a configuration file inside
+the ``/etc/opkg/`` directory, which informs ``opkg`` of any repository
+you want to use.
+
+As an example, suppose you are serving packages from a ``ipk/``
+directory containing the ``i586``, ``all``, and ``qemux86`` databases
+through an HTTP server named ``my.server``. On the target, create a
+configuration file (e.g. ``my_repo.conf``) inside the ``/etc/opkg/``
+directory containing the following:
+
+.. code-block:: none
+
+ src/gz all http://my.server/ipk/all
+ src/gz i586 http://my.server/ipk/i586
+ src/gz qemux86 http://my.server/ipk/qemux86
+
+Next, instruct ``opkg`` to fetch the
+repository information:
+
+.. code-block:: none
+
+ # opkg update
+
+The ``opkg`` application is now able to find, install, and upgrade packages
+from the specified repository.
+
+Using DEB
+~~~~~~~~~
+
+The ``apt`` application performs runtime package management of DEB
+packages. This application uses a source list file to find available
+package databases. You must perform an initial setup for ``apt`` on the
+target machine if the
+:term:`PACKAGE_FEED_ARCHS`,
+:term:`PACKAGE_FEED_BASE_PATHS`,
+and
+:term:`PACKAGE_FEED_URIS`
+variables have not been set or the target image was built before the
+variables were set.
+
+To inform ``apt`` of the repository you want to use, you might create a
+list file (e.g. ``my_repo.list``) inside the
+``/etc/apt/sources.list.d/`` directory. As an example, suppose you are
+serving packages from a ``deb/`` directory containing the ``i586``,
+``all``, and ``qemux86`` databases through an HTTP server named
+``my.server``. The list file should contain:
+
+.. code-block:: none
+
+ deb http://my.server/deb/all ./
+ deb http://my.server/deb/i586 ./
+ deb http://my.server/deb/qemux86 ./
+
+Next, instruct the ``apt`` application
+to fetch the repository information:
+
+.. code-block:: none
+
+ $ sudo apt update
+
+After this step,
+``apt`` is able to find, install, and upgrade packages from the
+specified repository.
+
+Generating and Using Signed Packages
+====================================
+
+In order to add security to RPM packages used during a build, you can
+take steps to securely sign them. Once a signature is verified, the
+OpenEmbedded build system can use the package in the build. If security
+fails for a signed package, the build system stops the build.
+
+This section describes how to sign RPM packages during a build and how
+to use signed package feeds (repositories) when doing a build.
+
+Signing RPM Packages
+--------------------
+
+To enable signing RPM packages, you must set up the following
+configurations in either your ``local.conf`` or distribution
+configuration file::
+
+ # Inherit sign_rpm.bbclass to enable signing functionality
+ INHERIT += " sign_rpm"
+ # Define the GPG key that will be used for signing.
+ RPM_GPG_NAME = "key_name"
+ # Provide passphrase for the key
+ RPM_GPG_PASSPHRASE = "passphrase"
+
+.. note::
+
+ Be sure to supply appropriate values for both `key_name` and
+ `passphrase`.
+
+Aside from the ``RPM_GPG_NAME`` and ``RPM_GPG_PASSPHRASE`` variables in
+the previous example, two optional variables related to signing are available:
+
+- *GPG_BIN:* Specifies a ``gpg`` binary/wrapper that is executed
+ when the package is signed.
+
+- *GPG_PATH:* Specifies the ``gpg`` home directory used when the
+ package is signed.
+
+Processing Package Feeds
+------------------------
+
+In addition to being able to sign RPM packages, you can also enable
+signed package feeds for IPK and RPM packages.
+
+The steps you need to take to enable signed package feed use are similar
+to the steps used to sign RPM packages. You must define the following in
+your ``local.conf`` or distribution configuration file::
+
+ INHERIT += "sign_package_feed"
+ PACKAGE_FEED_GPG_NAME = "key_name"
+ PACKAGE_FEED_GPG_PASSPHRASE_FILE = "path_to_file_containing_passphrase"
+
+For signed package feeds, the passphrase must be specified in a separate file,
+which is pointed to by the ``PACKAGE_FEED_GPG_PASSPHRASE_FILE``
+variable. Keeping a plain text passphrase out of the configuration is
+more secure.
+
+Aside from the ``PACKAGE_FEED_GPG_NAME`` and
+``PACKAGE_FEED_GPG_PASSPHRASE_FILE`` variables, three optional variables
+related to signed package feeds are available:
+
+- *GPG_BIN:* Specifies a ``gpg`` binary/wrapper that is executed
+ when the package is signed.
+
+- *GPG_PATH:* Specifies the ``gpg`` home directory used when the
+ package is signed.
+
+- *PACKAGE_FEED_GPG_SIGNATURE_TYPE:* Specifies the type of ``gpg``
+ signature. This variable applies only to RPM and IPK package feeds.
+ Allowable values for the ``PACKAGE_FEED_GPG_SIGNATURE_TYPE`` are
+ "ASC", which is the default and specifies ascii armored, and "BIN",
+ which specifies binary.
+
+Testing Packages With ptest
+===========================
+
+A Package Test (ptest) runs tests against packages built by the
+OpenEmbedded build system on the target machine. A ptest contains at
+least two items: the actual test, and a shell script (``run-ptest``)
+that starts the test. The shell script that starts the test must not
+contain the actual test --- the script only starts the test. On the other
+hand, the test can be anything from a simple shell script that runs a
+binary and checks the output to an elaborate system of test binaries and
+data files.
+
+The test generates output in the format used by Automake::
+
+ result: testname
+
+where the result can be ``PASS``, ``FAIL``, or ``SKIP``, and
+the testname can be any identifying string.
+
+For a list of Yocto Project recipes that are already enabled with ptest,
+see the :yocto_wiki:`Ptest </Ptest>` wiki page.
+
+.. note::
+
+ A recipe is "ptest-enabled" if it inherits the :ref:`ref-classes-ptest`
+ class.
+
+Adding ptest to Your Build
+--------------------------
+
+To add package testing to your build, add the :term:`DISTRO_FEATURES` and
+:term:`EXTRA_IMAGE_FEATURES` variables to your ``local.conf`` file, which
+is found in the :term:`Build Directory`::
+
+ DISTRO_FEATURES:append = " ptest"
+ EXTRA_IMAGE_FEATURES += "ptest-pkgs"
+
+Once your build is complete, the ptest files are installed into the
+``/usr/lib/package/ptest`` directory within the image, where ``package``
+is the name of the package.
+
+Running ptest
+-------------
+
+The ``ptest-runner`` package installs a shell script that loops through
+all installed ptest test suites and runs them in sequence. Consequently,
+you might want to add this package to your image.
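+
+As a minimal sketch, you could add the package through ``local.conf`` and then
+invoke the runner on the target, which executes every installed test suite in
+sequence::
+
+   IMAGE_INSTALL:append = " ptest-runner"
+
+Then, on the target::
+
+   # ptest-runner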
+
+Getting Your Package Ready
+--------------------------
+
+In order to enable a recipe to run installed ptests on target hardware,
+you need to prepare the recipes that build the packages you want to
+test. Here is what you have to do for each recipe:
+
+- *Be sure the recipe inherits the* :ref:`ref-classes-ptest` *class:*
+ Include the following line in each recipe::
+
+ inherit ptest
+
+- *Create run-ptest:* This script starts your test. Locate the
+ script where you will refer to it using
+ :term:`SRC_URI`. Here is an
+ example that starts a test for ``dbus``::
+
+ #!/bin/sh
+ cd test
+ make -k runtest-TESTS
+
+- *Ensure dependencies are met:* If the test adds build or runtime
+ dependencies that normally do not exist for the package (such as
+ requiring "make" to run the test suite), use the
+ :term:`DEPENDS` and
+ :term:`RDEPENDS` variables in
+ your recipe in order for the package to meet the dependencies. Here
+ is an example where the package has a runtime dependency on "make"::
+
+ RDEPENDS:${PN}-ptest += "make"
+
+- *Add a function to build the test suite:* Not many packages support
+ cross-compilation of their test suites. Consequently, you usually
+ need to add a cross-compilation function to the package.
+
+ Many packages based on Automake compile and run the test suite by
+ using a single command such as ``make check``. However, the host
+ ``make check`` builds and runs on the same computer, while
+ cross-compiling requires that the package is built on the host but
+ executed for the target architecture (though often, as in the case
+ for ptest, the execution occurs on the target). The built version of
+ Automake that ships with the Yocto Project includes a patch that
+ separates building and execution. Consequently, packages that use the
+ unaltered, patched version of ``make check`` automatically
+ cross-compile.
+
+ Regardless, you still must add a ``do_compile_ptest`` function to
+ build the test suite. Add a function similar to the following to your
+ recipe::
+
+ do_compile_ptest() {
+ oe_runmake buildtest-TESTS
+ }
+
+- *Ensure special configurations are set:* If the package requires
+ special configurations prior to compiling the test code, you must
+ insert a ``do_configure_ptest`` function into the recipe.
+
+- *Install the test suite:* The :ref:`ref-classes-ptest` class
+ automatically copies the file ``run-ptest`` to the target and then runs
+ ``make install-ptest`` to install the tests. If this is not enough, you need
+ to create a ``do_install_ptest`` function (see the sketch below) and make
+ sure it gets called after ``make install-ptest`` completes.
+
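+As an illustrative sketch (the file names are hypothetical), such a function
+usually copies extra test programs or data into :term:`PTEST_PATH`, the
+directory from which ``run-ptest`` is executed on the target::
+
+   do_install_ptest() {
+       install -d ${D}${PTEST_PATH}/tests
+       install -m 0644 ${B}/tests/test.data ${D}${PTEST_PATH}/tests/
+   }
+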
+Creating Node Package Manager (NPM) Packages
+============================================
+
+:wikipedia:`NPM <Npm_(software)>` is a package manager for the JavaScript
+programming language. The Yocto Project supports the NPM
+:ref:`fetcher <bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`.
+You can use this fetcher in combination with
+:doc:`devtool </ref-manual/devtool-reference>` to create recipes that produce
+NPM packages.
+
+There are two workflows that allow you to create NPM packages using
+``devtool``: the NPM registry modules method and the NPM project code
+method.
+
+.. note::
+
+ While it is possible to create NPM recipes manually, using
+ ``devtool`` is far simpler.
+
+Additionally, some requirements and caveats exist.
+
+Requirements and Caveats
+------------------------
+
+You need to be aware of the following before using ``devtool`` to create
+NPM packages:
+
+- Of the two methods that you can use ``devtool`` to create NPM
+ packages, the registry approach is slightly simpler. However, you
+ might consider the project approach because you do not have to
+ publish your module in the `NPM registry <https://docs.npmjs.com/misc/registry>`__,
+ which is NPM's public registry.
+
+- Be familiar with
+ :doc:`devtool </ref-manual/devtool-reference>`.
+
+- The NPM host tools need the native ``nodejs-npm`` package, which is
+ part of the OpenEmbedded environment. You need to get the package by
+ cloning the :oe_git:`meta-openembedded </meta-openembedded>`
+ repository. Be sure to add the path to your local copy
+ to your ``bblayers.conf`` file.
+
+- ``devtool`` cannot detect native libraries in module dependencies.
+ Consequently, you must manually add packages to your recipe.
+
+- While deploying NPM packages, ``devtool`` cannot determine which
+ dependent packages are missing on the target (e.g. the node runtime
+ ``nodejs``). Consequently, you need to find out what files are
+ missing and be sure they are on the target.
+
+- Although you might not need NPM to run your node package, it is
+ useful to have NPM on your target. The NPM package name is
+ ``nodejs-npm``.
+
+Using the Registry Modules Method
+---------------------------------
+
+This section presents an example that uses the ``cute-files`` module,
+which is a file browser web application.
+
+.. note::
+
+ You must know the ``cute-files`` module version.
+
+The first thing you need to do is use ``devtool`` and the NPM fetcher to
+create the recipe::
+
+ $ devtool add "npm://registry.npmjs.org;package=cute-files;version=1.0.2"
+
+The
+``devtool add`` command runs ``recipetool create`` and uses the same
+fetch URI to download each dependency and capture license details where
+possible. The result is a generated recipe.
+
+After running for quite a long time (in particular, building the
+``nodejs-native`` package), the command should end as follows::
+
+ INFO: Recipe /home/.../build/workspace/recipes/cute-files/cute-files_1.0.2.bb has been automatically created; further editing may be required to make it fully functional
+
+The recipe file is fairly simple: it lists every license that
+``recipetool`` finds and records them in the recipe's
+:term:`LIC_FILES_CHKSUM` variable. You need to examine the per-package
+:term:`LICENSE` values and look for any marked "unknown". For such
+modules, you need to track down the license information and manually add
+it to the recipe.
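+
+As a purely illustrative sketch (the sub-module name is hypothetical),
+correcting such an entry simply means replacing the generated assignment
+with the license you identified::
+
+   LICENSE:${PN}-some-module = "MIT"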
+
+``recipetool`` creates a "shrinkwrap" file for your recipe. Shrinkwrap
+files capture the version of all dependent modules. Many packages do not
+provide shrinkwrap files, but ``recipetool`` creates one as it runs.
+
+.. note::
+
+ A package is created for each sub-module. This policy is the only
+ practical way to have the licenses for all of the dependencies
+ represented in the license manifest of the image.
+
+The ``devtool edit-recipe`` command lets you take a look at the recipe::
+
+ $ devtool edit-recipe cute-files
+ # Recipe created by recipetool
+ # This is the basis of a recipe and may need further editing in order to be fully functional.
+ # (Feel free to remove these comments when editing.)
+
+ SUMMARY = "Turn any folder on your computer into a cute file browser, available on the local network."
+ # WARNING: the following LICENSE and LIC_FILES_CHKSUM values are best guesses - it is
+ # your responsibility to verify that the values are complete and correct.
+ #
+ # NOTE: multiple licenses have been detected; they have been separated with &
+ # in the LICENSE value for now since it is a reasonable assumption that all
+ # of the licenses apply. If instead there is a choice between the multiple
+ # licenses then you should change the value to separate the licenses with |
+ # instead of &. If there is any doubt, check the accompanying documentation
+ # to determine which situation is applicable.
+
+ SUMMARY = "Turn any folder on your computer into a cute file browser, available on the local network."
+ LICENSE = "BSD-3-Clause & ISC & MIT"
+ LIC_FILES_CHKSUM = "file://LICENSE;md5=71d98c0a1db42956787b1909c74a86ca \
+ file://node_modules/accepts/LICENSE;md5=bf1f9ad1e2e1d507aef4883fff7103de \
+ file://node_modules/array-flatten/LICENSE;md5=44088ba57cb871a58add36ce51b8de08 \
+ ...
+ file://node_modules/cookie-signature/Readme.md;md5=57ae8b42de3dd0c1f22d5f4cf191e15a"
+
+ SRC_URI = " \
+ npm://registry.npmjs.org/;package=cute-files;version=${PV} \
+ npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
+ "
+
+ S = "${WORKDIR}/npm"
+
+ inherit npm
+
+ LICENSE:${PN} = "MIT"
+ LICENSE:${PN}-accepts = "MIT"
+ LICENSE:${PN}-array-flatten = "MIT"
+ ...
+ LICENSE:${PN}-vary = "MIT"
+
+Three key points in the previous example are:
+
+- :term:`SRC_URI` uses the NPM
+ scheme so that the NPM fetcher is used.
+
+- ``recipetool`` collects all the license information. If a
+ sub-module's license is unavailable, the sub-module's name appears in
+ the comments.
+
+- The ``inherit npm`` statement causes the :ref:`ref-classes-npm` class to
+ package up all the modules.
+
+You can run the following command to build the ``cute-files`` package::
+
+ $ devtool build cute-files
+
+Remember that ``nodejs`` must be installed on
+the target before your package.
+
+Assuming 192.168.7.2 for the target's IP address, use the following
+command to deploy your package::
+
+ $ devtool deploy-target -s cute-files root@192.168.7.2
+
+Once the package is installed on the target, you can
+test the application to show the contents of any directory::
+
+ $ cd /usr/lib/node_modules/cute-files
+ $ cute-files
+
+In a browser,
+go to ``http://192.168.7.2:3000`` and you will see the following:
+
+.. image:: figures/cute-files-npm-example.png
+ :width: 100%
+
+You can find the recipe in ``workspace/recipes/cute-files``. You can use
+the recipe in any layer you choose.
+
+Using the NPM Projects Code Method
+----------------------------------
+
+Although it is useful to package modules already in the NPM registry,
+adding ``node.js`` projects under development is a more common developer
+use case.
+
+This section covers the NPM projects code method, which is very similar
+to the "registry" approach described in the previous section. In the NPM
+projects method, you provide ``devtool`` with a URL that points to the
+source files.
+
+Replicating the same example (i.e. ``cute-files``), use the following
+command::
+
+ $ devtool add https://github.com/martinaglv/cute-files.git
+
+The recipe this command generates is very similar to the recipe created in
+the previous section. However, the :term:`SRC_URI` looks like the following::
+
+ SRC_URI = " \
+ git://github.com/martinaglv/cute-files.git;protocol=https;branch=master \
+ npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
+ "
+
+In this example,
+the main module is taken from the Git repository and dependencies are
+taken from the NPM registry. Other than those differences, the recipe is
+basically the same between the two methods. You can build and deploy the
+package exactly as described in the previous section that uses the
+registry modules method.
+
+Adding custom metadata to packages
+==================================
+
+The variable
+:term:`PACKAGE_ADD_METADATA`
+can be used to add additional metadata to packages. This is reflected in
+the package control/spec file. To take the ipk format for example, the
+CONTROL file stored inside would contain the additional metadata as
+additional lines.
+
+The variable can be used in multiple ways, including using suffixes to
+set it for a specific package type and/or package. Note that the order
+of precedence is the same as this list:
+
+- ``PACKAGE_ADD_METADATA_<PKGTYPE>:<PN>``
+
+- ``PACKAGE_ADD_METADATA_<PKGTYPE>``
+
+- ``PACKAGE_ADD_METADATA:<PN>``
+
+- :term:`PACKAGE_ADD_METADATA`
+
+`<PKGTYPE>` is a parameter and is expected to be the distinct name of a
+specific package type:
+
+- IPK for .ipk packages
+
+- DEB for .deb packages
+
+- RPM for .rpm packages
+
+`<PN>` is a parameter and is expected to be a package name.
+
+The variable can contain multiple [one-line] metadata fields separated
+by the literal sequence '\\n'. The separator can be redefined using the
+variable flag ``separator``.
+
+Here is an example that adds two custom fields for ipk
+packages::
+
+ PACKAGE_ADD_METADATA_IPK = "Vendor: CustomIpk\nGroup:Applications/Spreadsheets"
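+
+As a further, purely illustrative sketch (the recipe name ``my-app`` is
+hypothetical), the suffixed forms restrict the extra fields to one
+package type and/or one recipe, and the ``separator`` variable flag
+changes the delimiter between fields::
+
+   # Only applied to .deb packages produced by the my-app recipe
+   PACKAGE_ADD_METADATA_DEB:my-app = "Vendor: ExampleCorp"
+
+   # Use ';' instead of the default '\n' to separate the two fields
+   PACKAGE_ADD_METADATA = "Vendor: ExampleCorp;Priority: optional"
+   PACKAGE_ADD_METADATA[separator] = ";"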
+
diff --git a/documentation/dev-manual/prebuilt-libraries.rst b/documentation/dev-manual/prebuilt-libraries.rst
new file mode 100644
index 0000000000..a05f39ca1e
--- /dev/null
+++ b/documentation/dev-manual/prebuilt-libraries.rst
@@ -0,0 +1,209 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Working with Pre-Built Libraries
+********************************
+
+Introduction
+============
+
+Some library vendors do not release source code for their software but do
+release pre-built binaries. When shared libraries are built, they should
+be versioned (see `this article
+<https://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html>`__
+for some background), but sometimes this is not done.
+
+To summarize, a versioned library must meet two conditions:
+
+#. The filename must have the version appended, for example: ``libfoo.so.1.2.3``.
+#. The library must have the ELF tag ``SONAME`` set to the major version
+ of the library, for example: ``libfoo.so.1``. You can check this by
+ running ``readelf -d filename | grep SONAME``.
+
+This section shows how to deal with both versioned and unversioned
+pre-built libraries.
+
+Versioned Libraries
+===================
+
+In this example we work with pre-built libraries for the FT4222H USB I/O chip.
+Libraries are built for several target architecture variants and packaged in
+an archive as follows::
+
+ ├── build-arm-hisiv300
+ │   └── libft4222.so.1.4.4.44
+ ├── build-arm-v5-sf
+ │   └── libft4222.so.1.4.4.44
+ ├── build-arm-v6-hf
+ │   └── libft4222.so.1.4.4.44
+ ├── build-arm-v7-hf
+ │   └── libft4222.so.1.4.4.44
+ ├── build-arm-v8
+ │   └── libft4222.so.1.4.4.44
+ ├── build-i386
+ │   └── libft4222.so.1.4.4.44
+ ├── build-i486
+ │   └── libft4222.so.1.4.4.44
+ ├── build-mips-eglibc-hf
+ │   └── libft4222.so.1.4.4.44
+ ├── build-pentium
+ │   └── libft4222.so.1.4.4.44
+ ├── build-x86_64
+ │   └── libft4222.so.1.4.4.44
+ ├── examples
+ │   ├── get-version.c
+ │   ├── i2cm.c
+ │   ├── spim.c
+ │   └── spis.c
+ ├── ftd2xx.h
+ ├── install4222.sh
+ ├── libft4222.h
+ ├── ReadMe.txt
+ └── WinTypes.h
+
+To write a recipe to use such a library in your system:
+
+- The vendor will probably have a proprietary license, so set
+  :term:`LICENSE_FLAGS` in your recipe.
+- The vendor provides a tarball containing libraries so set :term:`SRC_URI`
+ appropriately.
+- Set :term:`COMPATIBLE_HOST` so that the recipe cannot be used with an
+  unsupported architecture. In the following example, we only support the
+  32-bit and 64-bit variants of the ``x86`` architecture.
+- As the vendor provides versioned libraries, we can use ``oe_soinstall``
+ from :ref:`ref-classes-utils` to install the shared library and create
+ symbolic links. If the vendor does not do this, we need to follow the
+ non-versioned library guidelines in the next section.
+- As the vendor likely used :term:`LDFLAGS` different from those in your Yocto
+ Project build, disable the corresponding checks by adding ``ldflags``
+ to :term:`INSANE_SKIP`.
+- The vendor will typically ship release builds without debugging symbols.
+ Avoid errors by preventing the packaging task from stripping out the symbols
+ and adding them to a separate debug package. This is done by setting the
+ ``INHIBIT_`` flags shown below.
+
+The complete recipe would look like this::
+
+ SUMMARY = "FTDI FT4222H Library"
+ SECTION = "libs"
+ LICENSE_FLAGS = "ftdi"
+ LICENSE = "CLOSED"
+
+ COMPATIBLE_HOST = "(i.86|x86_64).*-linux"
+
+ # Sources available in a .tgz file in .zip archive
+ # at https://ftdichip.com/wp-content/uploads/2021/01/libft4222-linux-1.4.4.44.zip
+ # Found on https://ftdichip.com/software-examples/ft4222h-software-examples/
+   # Since dealing with this particular type of archive is out of scope here,
+ # we use a local link.
+ SRC_URI = "file://libft4222-linux-${PV}.tgz"
+
+ S = "${WORKDIR}"
+
+ ARCH_DIR:x86-64 = "build-x86_64"
+ ARCH_DIR:i586 = "build-i386"
+ ARCH_DIR:i686 = "build-i386"
+
+ INSANE_SKIP:${PN} = "ldflags"
+ INHIBIT_PACKAGE_STRIP = "1"
+ INHIBIT_SYSROOT_STRIP = "1"
+ INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
+
+ do_install () {
+ install -m 0755 -d ${D}${libdir}
+ oe_soinstall ${S}/${ARCH_DIR}/libft4222.so.${PV} ${D}${libdir}
+ install -d ${D}${includedir}
+ install -m 0755 ${S}/*.h ${D}${includedir}
+ }
+
+If the precompiled binaries are not statically linked and have dependencies on
+other libraries, then by adding those libraries to :term:`DEPENDS`, the linking
+can be examined and the appropriate :term:`RDEPENDS` automatically added.
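+
+For example (this is only an assumption for illustration, not a statement
+about the FT4222H library's actual dependencies), if the pre-built binary
+were dynamically linked against libusb, you would add::
+
+   DEPENDS += "libusb1"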
+
+Non-Versioned Libraries
+=======================
+
+Some Background
+---------------
+
+Libraries in Linux systems are generally versioned so that it is possible
+to have multiple versions of the same library installed, which eases upgrades
+and support for older software. For example, suppose that in a versioned
+library, an actual library is called ``libfoo.so.1.2``, a symbolic link named
+``libfoo.so.1`` points to ``libfoo.so.1.2``, and a symbolic link named
+``libfoo.so`` points to ``libfoo.so.1.2``. Given these conditions, when you
+link a binary against a library, you typically provide the unversioned file
+name (i.e. ``-lfoo`` to the linker). However, the linker follows the symbolic
+link and actually links against the versioned filename. The unversioned symbolic
+link is only used at development time. Consequently, the library is packaged
+along with the headers in the development package ``${PN}-dev`` along with the
+actual library and versioned symbolic links in ``${PN}``. Because versioned
+libraries are far more common than unversioned libraries, the default packaging
+rules assume versioned libraries.
+
+Yocto Library Packaging Overview
+--------------------------------
+
+It follows that packaging an unversioned library requires a bit of work in the
+recipe. By default, ``libfoo.so`` gets packaged into ``${PN}-dev``, which
+triggers a QA warning that a non-symlink library is in a ``-dev`` package,
+and binaries in the same recipe link to the library in ``${PN}-dev``,
+which triggers more QA warnings. To solve this problem, you need to package the
+unversioned library into ``${PN}`` where it belongs. The abridged
+default :term:`FILES` variables in ``bitbake.conf`` are::
+
+ SOLIBS = ".so.*"
+ SOLIBSDEV = ".so"
+ FILES:${PN} = "... ${libdir}/lib*${SOLIBS} ..."
+ FILES_SOLIBSDEV ?= "... ${libdir}/lib*${SOLIBSDEV} ..."
+ FILES:${PN}-dev = "... ${FILES_SOLIBSDEV} ..."
+
+:term:`SOLIBS` defines a pattern that matches real shared object libraries.
+:term:`SOLIBSDEV` matches the development form (unversioned symlink). These two
+variables are then used in ``FILES:${PN}`` and ``FILES:${PN}-dev``, which puts
+the real libraries into ``${PN}`` and the unversioned symbolic link into ``${PN}-dev``.
+To package unversioned libraries, you need to modify the variables in the recipe
+as follows::
+
+ SOLIBS = ".so"
+ FILES_SOLIBSDEV = ""
+
+The modifications cause the ``.so`` file to be the real library
+and unset :term:`FILES_SOLIBSDEV` so that no libraries get packaged into
+``${PN}-dev``. The changes are required because unless :term:`PACKAGES` is changed,
+``${PN}-dev`` collects files before ``${PN}``. ``${PN}-dev`` must not collect any of
+the files you want in ``${PN}``.
+
+Finally, loadable modules, essentially unversioned libraries that are linked
+at runtime using ``dlopen()`` instead of at build time, should generally be
+installed in a private directory. However, if they are installed in ``${libdir}``,
+then the modules can be treated as unversioned libraries.
+
+Example
+-------
+
+The example below installs an unversioned x86-64 pre-built library named
+``libfoo.so``. The :term:`COMPATIBLE_HOST` variable limits recipes to the
+x86-64 architecture while the :term:`INSANE_SKIP`, :term:`INHIBIT_PACKAGE_STRIP`
+and :term:`INHIBIT_SYSROOT_STRIP` variables are all set as in the above
+versioned library example. The "magic" is setting the :term:`SOLIBS` and
+:term:`FILES_SOLIBSDEV` variables as explained above::
+
+ SUMMARY = "libfoo sample recipe"
+ SECTION = "libs"
+ LICENSE = "CLOSED"
+
+ SRC_URI = "file://libfoo.so"
+
+ COMPATIBLE_HOST = "x86_64.*-linux"
+
+ INSANE_SKIP:${PN} = "ldflags"
+ INHIBIT_PACKAGE_STRIP = "1"
+ INHIBIT_SYSROOT_STRIP = "1"
+ SOLIBS = ".so"
+ FILES_SOLIBSDEV = ""
+
+ do_install () {
+ install -d ${D}${libdir}
+ install -m 0755 ${WORKDIR}/libfoo.so ${D}${libdir}
+ }
+
diff --git a/documentation/dev-manual/python-development-shell.rst b/documentation/dev-manual/python-development-shell.rst
new file mode 100644
index 0000000000..81a5c43472
--- /dev/null
+++ b/documentation/dev-manual/python-development-shell.rst
@@ -0,0 +1,50 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using a Python Development Shell
+********************************
+
+Similar to working within a development shell as described in the
+previous section, you can also spawn and work within an interactive
+Python development shell. When debugging certain commands or even when
+just editing packages, ``pydevshell`` can be a useful tool. When you
+invoke the ``pydevshell`` task, all tasks up to and including
+:ref:`ref-tasks-patch` are run for the
+specified target. Then a new terminal is opened. Additionally, key
+Python objects and code are available in the same way they are to
+BitBake tasks, in particular, the data store 'd'. So, commands such as
+the following are useful when exploring the data store and running
+functions::
+
+ pydevshell> d.getVar("STAGING_DIR")
+ '/media/build1/poky/build/tmp/sysroots'
+ pydevshell> d.getVar("STAGING_DIR", False)
+ '${TMPDIR}/sysroots'
+ pydevshell> d.setVar("FOO", "bar")
+ pydevshell> d.getVar("FOO")
+ 'bar'
+ pydevshell> d.delVar("FOO")
+ pydevshell> d.getVar("FOO")
+ pydevshell> bb.build.exec_func("do_unpack", d)
+ pydevshell>
+
+See the ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:functions you can call from within python`"
+section in the BitBake User Manual for details about available functions.
+
+The commands execute just as if the OpenEmbedded build
+system were executing them. Consequently, working this way can be
+helpful when debugging a build or preparing software to be used with the
+OpenEmbedded build system.
+
+Here is an example that uses ``pydevshell`` on a target named
+``matchbox-desktop``::
+
+ $ bitbake matchbox-desktop -c pydevshell
+
+This command spawns a terminal and places you in an interactive Python
+interpreter within the OpenEmbedded build environment. The
+:term:`OE_TERMINAL` variable
+controls what type of shell is opened.
+
+When you are finished using ``pydevshell``, you can exit the shell
+either by using Ctrl+d or closing the terminal window.
+
diff --git a/documentation/dev-manual/qemu.rst b/documentation/dev-manual/qemu.rst
new file mode 100644
index 0000000000..19f3e40d63
--- /dev/null
+++ b/documentation/dev-manual/qemu.rst
@@ -0,0 +1,471 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+*******************************
+Using the Quick EMUlator (QEMU)
+*******************************
+
+The Yocto Project uses an implementation of the Quick EMUlator (QEMU)
+Open Source project as part of the Yocto Project development "tool set".
+This chapter provides both procedures that show you how to use the Quick
+EMUlator (QEMU) and other QEMU information helpful for development
+purposes.
+
+Overview
+========
+
+Within the context of the Yocto Project, QEMU is an emulator and
+virtualization machine that allows you to run a complete image you have
+built using the Yocto Project as just another task on your build system.
+QEMU is useful for running and testing images and applications on
+supported Yocto Project architectures without having actual hardware.
+Among other things, the Yocto Project uses QEMU to run automated Quality
+Assurance (QA) tests on final images shipped with each release.
+
+.. note::
+
+ This implementation is not the same as QEMU in general.
+
+This section provides a brief reference for the Yocto Project
+implementation of QEMU.
+
+For official information and documentation on QEMU in general, see the
+following references:
+
+- `QEMU Website <https://wiki.qemu.org/Main_Page>`__\ *:* The official
+ website for the QEMU Open Source project.
+
+- `Documentation <https://wiki.qemu.org/Manual>`__\ *:* The QEMU user
+ manual.
+
+Running QEMU
+============
+
+To use QEMU, you need to have QEMU installed and initialized as well as
+have the proper artifacts (i.e. image files and root filesystems)
+available. Follow these general steps to run QEMU:
+
+#. *Install QEMU:* QEMU is made available with the Yocto Project in a
+   number of ways. One method is to install a Software Development Kit
+ (SDK). See ":ref:`sdk-manual/intro:the qemu emulator`" section in the
+ Yocto Project Application Development and the Extensible Software
+ Development Kit (eSDK) manual for information on how to install QEMU.
+
+#. *Set Up the Environment:* How you set up the QEMU environment
+ depends on how you installed QEMU:
+
+ - If you cloned the ``poky`` repository or you downloaded and
+ unpacked a Yocto Project release tarball, you can source the build
+ environment script (i.e. :ref:`structure-core-script`)::
+
+ $ cd poky
+ $ source oe-init-build-env
+
+ - If you installed a cross-toolchain, you can run the script that
+ initializes the toolchain. For example, the following commands run
+ the initialization script from the default ``poky_sdk`` directory::
+
+ . poky_sdk/environment-setup-core2-64-poky-linux
+
+#. *Ensure the Artifacts are in Place:* You need to be sure you have a
+ pre-built kernel that will boot in QEMU. You also need the target
+ root filesystem for your target machine's architecture:
+
+ - If you have previously built an image for QEMU (e.g. ``qemux86``,
+ ``qemuarm``, and so forth), then the artifacts are in place in
+ your :term:`Build Directory`.
+
+ - If you have not built an image, you can go to the
+ :yocto_dl:`machines/qemu </releases/yocto/yocto-&DISTRO;/machines/qemu/>` area and download a
+ pre-built image that matches your architecture and can be run on
+ QEMU.
+
+ See the ":ref:`sdk-manual/appendix-obtain:extracting the root filesystem`"
+ section in the Yocto Project Application Development and the
+ Extensible Software Development Kit (eSDK) manual for information on
+ how to extract a root filesystem.
+
+#. *Run QEMU:* The basic ``runqemu`` command syntax is as follows::
+
+      $ runqemu [option] [...]
+
+   Based on what you provide on the command
+   line, ``runqemu`` does a good job of figuring out what you are trying
+   to do. For example, when it needs an image, ``runqemu`` defaults to the
+   most recently built one according to its timestamp. Minimally, through
+   the use of options, you must provide either a machine name, a virtual
+   machine image (``*wic.vmdk``), or a kernel image (``*.bin``).
+
+ Here are some additional examples to help illustrate further QEMU:
+
+ - This example starts QEMU with MACHINE set to "qemux86-64".
+ Assuming a standard :term:`Build Directory`, ``runqemu``
+ automatically finds the ``bzImage-qemux86-64.bin`` image file and
+ the ``core-image-minimal-qemux86-64-20200218002850.rootfs.ext4``
+ (assuming the current build created a ``core-image-minimal``
+ image)::
+
+ $ runqemu qemux86-64
+
+ .. note::
+
+ When more than one image with the same name exists, QEMU finds
+ and uses the most recently built image according to the
+ timestamp.
+
+ - This example produces the exact same results as the previous
+ example. This command, however, specifically provides the image
+ and root filesystem type::
+
+ $ runqemu qemux86-64 core-image-minimal ext4
+
+ - This example specifies to boot an :term:`Initramfs` image and to
+ enable audio in QEMU. For this case, ``runqemu`` sets the internal
+ variable ``FSTYPE`` to ``cpio.gz``. Also, for audio to be enabled,
+ an appropriate driver must be installed (see the ``audio`` option
+ in :ref:`dev-manual/qemu:\`\`runqemu\`\` command-line options`
+ for more information)::
+
+ $ runqemu qemux86-64 ramfs audio
+
+ - This example does not provide enough information for QEMU to
+ launch. While the command does provide a root filesystem type, it
+ must also minimally provide a `MACHINE`, `KERNEL`, or `VM` option::
+
+ $ runqemu ext4
+
+ - This example specifies to boot a virtual machine image
+ (``.wic.vmdk`` file). From the ``.wic.vmdk``, ``runqemu``
+ determines the QEMU architecture (`MACHINE`) to be "qemux86-64" and
+ the root filesystem type to be "vmdk"::
+
+ $ runqemu /home/scott-lenovo/vm/core-image-minimal-qemux86-64.wic.vmdk
+
+Switching Between Consoles
+==========================
+
+When booting or running QEMU, you can switch between supported consoles
+by using Ctrl+Alt+number. For example, Ctrl+Alt+3 switches you to the
+serial console as long as that console is enabled. Being able to switch
+consoles is helpful, for example, if the main QEMU console breaks for
+some reason.
+
+.. note::
+
+ Usually, "2" gets you to the main console and "3" gets you to the
+ serial console.
+
+Removing the Splash Screen
+==========================
+
+You can remove the splash screen when QEMU is booting by using Alt+left.
+Removing the splash screen allows you to see what is happening in the
+background.
+
+Disabling the Cursor Grab
+=========================
+
+The default QEMU integration captures the cursor within the main window.
+It does this since standard mouse devices only provide relative input
+and not absolute coordinates. You then have to break out of the grab
+using the "Ctrl+Alt" key combination. However, the Yocto Project's
+integration of QEMU enables the Wacom USB touch pad driver by default to
+allow input of absolute coordinates. This default means that the mouse
+can enter and leave the main window without the grab taking effect,
+leading to a better user experience.
+
+Running Under a Network File System (NFS) Server
+================================================
+
+One method for running QEMU is to run it on an NFS server. This is
+useful when you need to access the same file system from both the build
+and the emulated system at the same time. It is also worth noting that
+the system does not need root privileges to run. It uses a user space
+NFS server to avoid that. Follow these steps to set up for running QEMU
+using an NFS server.
+
+#. *Extract a Root Filesystem:* Once you are able to run QEMU in your
+ environment, you can use the ``runqemu-extract-sdk`` script, which is
+ located in the ``scripts`` directory along with the ``runqemu``
+ script.
+
+ The ``runqemu-extract-sdk`` takes a root filesystem tarball and
+ extracts it into a location that you specify. Here is an example that
+ takes a file system and extracts it to a directory named
+ ``test-nfs``:
+
+ .. code-block:: none
+
+ runqemu-extract-sdk ./tmp/deploy/images/qemux86-64/core-image-sato-qemux86-64.tar.bz2 test-nfs
+
+#. *Start QEMU:* Once you have extracted the file system, you can run
+ ``runqemu`` normally with the additional location of the file system.
+ You can then also make changes to the files within ``./test-nfs`` and
+ see those changes appear in the image in real time. Here is an
+ example using the ``qemux86`` image:
+
+ .. code-block:: none
+
+ runqemu qemux86-64 ./test-nfs
+
+.. note::
+
+ Should you need to start, stop, or restart the NFS share, you can use
+ the following commands:
+
+ - To start the NFS share::
+
+ runqemu-export-rootfs start file-system-location
+
+ - To stop the NFS share::
+
+ runqemu-export-rootfs stop file-system-location
+
+ - To restart the NFS share::
+
+ runqemu-export-rootfs restart file-system-location
+
+QEMU CPU Compatibility Under KVM
+================================
+
+By default, the QEMU build compiles for and targets 64-bit x86 Intel
+Core 2 Duo processors and 32-bit x86 Intel Pentium II processors. QEMU
+builds for and targets these CPU types because they display a broad
+range of CPU feature compatibility with many commonly used CPUs.
+
+Despite this broad range of compatibility, the CPUs could support a
+feature that your host CPU does not support. Although this situation is
+not a problem when QEMU uses software emulation of the feature, it can
+be a problem when QEMU is running with KVM enabled. Specifically,
+software compiled with a certain CPU feature crashes when run on a CPU
+under KVM that does not support that feature. To work around this
+problem, you can override QEMU's runtime CPU setting by changing the
+``QB_CPU_KVM`` variable in ``qemuboot.conf`` in the :term:`Build Directory`
+``deploy/image`` directory. This setting specifies a ``-cpu`` option passed
+into QEMU in the ``runqemu`` script. Running ``qemu -cpu help`` returns a
+list of available supported CPU types.
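+
+For example, a conservative (and purely illustrative) override that works
+on most KVM-capable hosts is the generic ``kvm64`` CPU model; pick
+whichever model from the ``qemu -cpu help`` list matches your host::
+
+   QB_CPU_KVM = "-cpu kvm64"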
+
+QEMU Performance
+================
+
+Using QEMU to emulate your hardware can result in speed issues depending
+on the target and host architecture mix. For example, using the
+``qemux86`` image in the emulator on an Intel-based 32-bit (x86) host
+machine is fast because the target and host architectures match. On the
+other hand, using the ``qemuarm`` image on the same Intel-based host can
+be slower. But, you still achieve faithful emulation of ARM-specific
+issues.
+
+To speed things up, the QEMU images support using ``distcc`` to call a
+cross-compiler outside the emulated system. If you used ``runqemu`` to
+start QEMU, and the ``distccd`` application is present on the host
+system, any BitBake cross-compiling toolchain available from the build
+system is automatically used from within QEMU simply by calling
+``distcc``. You can accomplish this by defining the cross-compiler
+variable (e.g. ``export CC="distcc"``). Alternatively, if you are using
+a suitable SDK image or the appropriate stand-alone toolchain is
+present, the toolchain is also automatically used.
+
+.. note::
+
+ There are several mechanisms to connect to the system running
+ on the QEMU emulator:
+
+ - QEMU provides a framebuffer interface that makes standard consoles
+ available.
+
+ - Generally, headless embedded devices have a serial port. If so,
+ you can configure the operating system of the running image to use
+ that port to run a console. The connection uses standard IP
+ networking.
+
+ - SSH servers are available in some QEMU images. The ``core-image-sato``
+ QEMU image has a Dropbear secure shell (SSH) server that runs with
+ the root password disabled. The ``core-image-full-cmdline`` and
+ ``core-image-lsb`` QEMU images have OpenSSH instead of Dropbear.
+     Including these SSH servers allows you to use standard ``ssh`` and
+ ``scp`` commands. The ``core-image-minimal`` QEMU image, however,
+ contains no SSH server.
+
+ - You can use a provided, user-space NFS server to boot the QEMU
+ session using a local copy of the root filesystem on the host. In
+ order to make this connection, you must extract a root filesystem
+ tarball by using the ``runqemu-extract-sdk`` command. After
+ running the command, you must then point the ``runqemu`` script to
+ the extracted directory instead of a root filesystem image file.
+ See the
+ ":ref:`dev-manual/qemu:running under a network file system (nfs) server`"
+ section for more information.
+
+QEMU Command-Line Syntax
+========================
+
+The basic ``runqemu`` command syntax is as follows::
+
+   $ runqemu [option] [...]
+
+Based on what you provide on the command line, ``runqemu`` does a
+good job of figuring out what you are trying to do. For example, when it
+needs an image, ``runqemu`` defaults to the most recently built one
+according to its timestamp. Minimally, through the use of options, you
+must provide either a machine name, a virtual machine image
+(``*wic.vmdk``), or a kernel image (``*.bin``).
+
+Here is the command-line help output for the ``runqemu`` command::
+
+ $ runqemu --help
+
+ Usage: you can run this script with any valid combination
+ of the following environment variables (in any order):
+ KERNEL - the kernel image file to use
+ ROOTFS - the rootfs image file or nfsroot directory to use
+ MACHINE - the machine name (optional, autodetected from KERNEL filename if unspecified)
+ Simplified QEMU command-line options can be passed with:
+ nographic - disable video console
+ serial - enable a serial console on /dev/ttyS0
+ slirp - enable user networking, no root privileges required
+ kvm - enable KVM when running x86/x86_64 (VT-capable CPU required)
+ kvm-vhost - enable KVM with vhost when running x86/x86_64 (VT-capable CPU required)
+ publicvnc - enable a VNC server open to all hosts
+ audio - enable audio
+ [*/]ovmf* - OVMF firmware file or base name for booting with UEFI
+ tcpserial=<port> - specify tcp serial port number
+ biosdir=<dir> - specify custom bios dir
+ biosfilename=<filename> - specify bios filename
+ qemuparams=<xyz> - specify custom parameters to QEMU
+ bootparams=<xyz> - specify custom kernel parameters during boot
+ help, -h, --help: print this text
+
+ Examples:
+ runqemu
+ runqemu qemuarm
+ runqemu tmp/deploy/images/qemuarm
+ runqemu tmp/deploy/images/qemux86/<qemuboot.conf>
+ runqemu qemux86-64 core-image-sato ext4
+ runqemu qemux86-64 wic-image-minimal wic
+ runqemu path/to/bzImage-qemux86.bin path/to/nfsrootdir/ serial
+ runqemu qemux86 iso/hddimg/wic.vmdk/wic.qcow2/wic.vdi/ramfs/cpio.gz...
+ runqemu qemux86 qemuparams="-m 256"
+ runqemu qemux86 bootparams="psplash=false"
+ runqemu path/to/<image>-<machine>.wic
+ runqemu path/to/<image>-<machine>.wic.vmdk
+
+``runqemu`` Command-Line Options
+================================
+
+Here is a description of ``runqemu`` options you can provide on the
+command line:
+
+.. note::
+
+ If you do provide some "illegal" option combination or perhaps you do
+ not provide enough in the way of options, ``runqemu``
+ provides appropriate error messaging to help you correct the problem.
+
+- `QEMUARCH`: The QEMU machine architecture, which must be "qemuarm",
+ "qemuarm64", "qemumips", "qemumips64", "qemuppc", "qemux86", or
+ "qemux86-64".
+
+- `VM`: The virtual machine image, which must be a ``.wic.vmdk``
+ file. Use this option when you want to boot a ``.wic.vmdk`` image.
+ The image filename you provide must contain one of the following
+ strings: "qemux86-64", "qemux86", "qemuarm", "qemumips64",
+ "qemumips", "qemuppc", or "qemush4".
+
+- `ROOTFS`: A root filesystem that has one of the following filetype
+ extensions: "ext2", "ext3", "ext4", "jffs2", "nfs", or "btrfs". If
+ the filename you provide for this option uses "nfs", it must provide
+ an explicit root filesystem path.
+
+- `KERNEL`: A kernel image, which is a ``.bin`` file. When you provide a
+ ``.bin`` file, ``runqemu`` detects it and assumes the file is a
+ kernel image.
+
+- `MACHINE`: The architecture of the QEMU machine, which must be one of
+ the following: "qemux86", "qemux86-64", "qemuarm", "qemuarm64",
+ "qemumips", "qemumips64", or "qemuppc". The MACHINE and QEMUARCH
+ options are basically identical. If you do not provide a MACHINE
+ option, ``runqemu`` tries to determine it based on other options.
+
+- ``ramfs``: Indicates you are booting an :term:`Initramfs`
+ image, which means the ``FSTYPE`` is ``cpio.gz``.
+
+- ``iso``: Indicates you are booting an ISO image, which means the
+ ``FSTYPE`` is ``.iso``.
+
+- ``nographic``: Disables the video console, which sets the console to
+  "ttyS0". This option is useful when you have logged into a server and
+ you do not want to disable forwarding from the X Window System (X11)
+ to your workstation or laptop.
+
+- ``serial``: Enables a serial console on ``/dev/ttyS0``.
+
+- ``biosdir``: Establishes a custom directory for BIOS, VGA BIOS and
+ keymaps.
+
+- ``biosfilename``: Establishes a custom BIOS name.
+
+- ``qemuparams=\"xyz\"``: Specifies custom QEMU parameters. Use this
+ option to pass options other than the simple "kvm" and "serial"
+ options.
+
+- ``bootparams=\"xyz\"``: Specifies custom boot parameters for the
+ kernel.
+
+- ``audio``: Enables audio in QEMU. The MACHINE option must be either
+ "qemux86" or "qemux86-64" in order for audio to be enabled.
+  Additionally, the ``snd_intel8x0`` or ``snd_ens1370`` driver must be
+  installed in the Linux guest.
+
+- ``slirp``: Enables "slirp" networking, which is a different way of
+ networking that does not need root access but also is not as easy to
+ use or comprehensive as the default.
+
+  Using ``slirp`` by default forwards the guest machine's TCP ports 22
+  and 23 to the host machine's ports 2222 and 2323 (or the next free
+  ports). Specific forwarding rules can be configured
+ by setting ``QB_SLIRP_OPT`` as environment variable or in ``qemuboot.conf``
+ in the :term:`Build Directory` ``deploy/image`` directory.
+ Examples::
+
+ QB_SLIRP_OPT="-netdev user,id=net0,hostfwd=tcp::8080-:80"
+
+ QB_SLIRP_OPT="-netdev user,id=net0,hostfwd=tcp::8080-:80,hostfwd=tcp::2222-:22"
+
+ The first example forwards TCP port 80 from the emulated system to
+ port 8080 (or the next free port) on the host system,
+ allowing access to an http server running in QEMU from
+ ``http://<host ip>:8080/``.
+
+ The second example does the same, but also forwards TCP port 22 on the
+ guest system to 2222 (or the next free port) on the host system,
+ allowing ssh access to the emulated system using
+  ``ssh -p 2222 <user>@<host ip>``.
+
+ Keep in mind that proper configuration of firewall software is required.
+
+- ``kvm``: Enables KVM when running "qemux86" or "qemux86-64" QEMU
+ architectures. For KVM to work, all the following conditions must be
+ met:
+
+   - Your MACHINE must be either "qemux86" or "qemux86-64".
+
+   - Your build host has to have the KVM modules installed, which provide
+     ``/dev/kvm``.
+
+   - The build host ``/dev/kvm`` device node has to be both readable and
+     writable.
+
+- ``kvm-vhost``: Enables KVM with VHOST support when running "qemux86"
+ or "qemux86-64" QEMU architectures. For KVM with VHOST to work, the
+ following conditions must be met:
+
+ - ``kvm`` option conditions defined above must be met.
+
+   - Your build host has to have the virtio net device, which is
+     ``/dev/vhost-net``.
+
+   - The build host ``/dev/vhost-net`` device node has to be readable and
+     writable, and "slirp-enabled".
+
+- ``publicvnc``: Enables a VNC server open to all hosts.
diff --git a/documentation/dev-manual/quilt.rst b/documentation/dev-manual/quilt.rst
new file mode 100644
index 0000000000..59240705ad
--- /dev/null
+++ b/documentation/dev-manual/quilt.rst
@@ -0,0 +1,89 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using Quilt in Your Workflow
+****************************
+
+`Quilt <https://savannah.nongnu.org/projects/quilt>`__ is a powerful tool
+that allows you to capture source code changes without having a clean
+source tree. This section outlines the typical workflow you can use to
+modify source code, test changes, and then preserve the changes in the
+form of a patch all using Quilt.
+
+.. note::
+
+ With regard to preserving changes to source files, if you clean a
+ recipe or have :ref:`ref-classes-rm-work` enabled, the
+ :ref:`devtool workflow <sdk-manual/extensible:using \`\`devtool\`\` in your sdk workflow>`
+ as described in the Yocto Project Application Development and the
+ Extensible Software Development Kit (eSDK) manual is a safer
+ development flow than the flow that uses Quilt.
+
+Follow these general steps:
+
+#. *Find the Source Code:* Temporary source code used by the
+ OpenEmbedded build system is kept in the :term:`Build Directory`. See the
+ ":ref:`dev-manual/temporary-source-code:finding temporary source code`" section to
+ learn how to locate the directory that has the temporary source code for a
+ particular package.
+
+#. *Change Your Working Directory:* You need to be in the directory that
+ has the temporary source code. That directory is defined by the
+ :term:`S` variable.
+
+#. *Create a New Patch:* Before modifying source code, you need to
+ create a new patch. To create a new patch file, use ``quilt new`` as
+ below::
+
+ $ quilt new my_changes.patch
+
+#. *Notify Quilt and Add Files:* After creating the patch, you need to
+ notify Quilt about the files you plan to edit. You notify Quilt by
+ adding the files to the patch you just created::
+
+ $ quilt add file1.c file2.c file3.c
+
+#. *Edit the Files:* Make your changes in the source code to the files
+ you added to the patch.
+
+#. *Test Your Changes:* Once you have modified the source code, the
+ easiest way to test your changes is by calling the :ref:`ref-tasks-compile`
+ task as shown in the following example::
+
+ $ bitbake -c compile -f package
+
+ The ``-f`` or ``--force`` option forces the specified task to
+ execute. If you find problems with your code, you can just keep
+ editing and re-testing iteratively until things work as expected.
+
+ .. note::
+
+ All the modifications you make to the temporary source code disappear
+ once you run the :ref:`ref-tasks-clean` or :ref:`ref-tasks-cleanall`
+ tasks using BitBake (i.e. ``bitbake -c clean package`` and
+ ``bitbake -c cleanall package``). Modifications will also disappear if
+ you use the :ref:`ref-classes-rm-work` feature as described in
+ the ":ref:`dev-manual/disk-space:conserving disk space during builds`"
+ section.
+
+#. *Generate the Patch:* Once your changes work as expected, you need to
+ use Quilt to generate the final patch that contains all your
+ modifications::
+
+ $ quilt refresh
+
+ At this point, the
+ ``my_changes.patch`` file has all your edits made to the ``file1.c``,
+ ``file2.c``, and ``file3.c`` files.
+
+ You can find the resulting patch file in the ``patches/``
+ subdirectory of the source (:term:`S`) directory.
+
+#. *Copy the Patch File:* For simplicity, copy the patch file into a
+ directory named ``files``, which you can create in the same directory
+ that holds the recipe (``.bb``) file or the append (``.bbappend``)
+ file. Placing the patch here guarantees that the OpenEmbedded build
+ system will find the patch. Next, add the patch into the :term:`SRC_URI`
+ of the recipe. Here is an example::
+
+ SRC_URI += "file://my_changes.patch"
+
diff --git a/documentation/dev-manual/read-only-rootfs.rst b/documentation/dev-manual/read-only-rootfs.rst
new file mode 100644
index 0000000000..251178ed54
--- /dev/null
+++ b/documentation/dev-manual/read-only-rootfs.rst
@@ -0,0 +1,89 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Creating a Read-Only Root Filesystem
+************************************
+
+Suppose, for security reasons, you need to disable your target device's
+root filesystem's write permissions (i.e. you need a read-only root
+filesystem). Or, perhaps you are running the device's operating system
+from a read-only storage device. For either case, you can customize your
+image for that behavior.
+
+.. note::
+
+ Supporting a read-only root filesystem requires that the system and
+ applications do not try to write to the root filesystem. You must
+ configure all parts of the target system to write elsewhere, or to
+ gracefully fail in the event of attempting to write to the root
+ filesystem.
+
+Creating the Root Filesystem
+============================
+
+To create the read-only root filesystem, simply add the
+"read-only-rootfs" feature to your image, normally in one of two ways.
+The first way is to add the "read-only-rootfs" image feature in the
+image's recipe file via the :term:`IMAGE_FEATURES` variable::
+
+ IMAGE_FEATURES += "read-only-rootfs"
+
+As an alternative, you can add the same feature
+from within your :term:`Build Directory`'s ``local.conf`` file with the
+associated :term:`EXTRA_IMAGE_FEATURES` variable, as in::
+
+ EXTRA_IMAGE_FEATURES = "read-only-rootfs"
+
+For more information on how to use these variables, see the
+":ref:`dev-manual/customizing-images:Customizing Images Using Custom \`\`IMAGE_FEATURES\`\` and \`\`EXTRA_IMAGE_FEATURES\`\``"
+section. For information on the variables, see
+:term:`IMAGE_FEATURES` and
+:term:`EXTRA_IMAGE_FEATURES`.
+
+Post-Installation Scripts and Read-Only Root Filesystem
+=======================================================
+
+It is very important that you make sure all post-installation
+(``pkg_postinst``) scripts for packages that are installed into the
+image can be run at the time when the root filesystem is created during
+the build on the host system. These scripts cannot attempt to run during
+the first boot on the target device. With the "read-only-rootfs" feature
+enabled, the build system makes sure that all post-installation scripts
+succeed at file system creation time. If any of these scripts
+still need to be run after the root filesystem is created, the build
+immediately fails. These build-time checks ensure that the build fails
+rather than the target device fails later during its initial boot
+operation.
+
+Most of the common post-installation scripts generated by the build
+system for the out-of-the-box Yocto Project are engineered so that they
+can run during root filesystem creation (e.g. post-installation scripts
+for caching fonts). However, if you create and add custom scripts, you
+need to be sure they can be run during this file system creation.
+
+Here are some common problems that prevent post-installation scripts
+from running during root filesystem creation:
+
+- *Not using $D in front of absolute paths:* The build system defines
+  ``$``\ :term:`D` when the root filesystem is created. Furthermore,
+  ``$D`` is blank when the script is run on the target device. ``$D``
+  therefore serves two purposes: it keeps paths valid in both the host
+  and target environments, and it lets a script detect which environment
+  it is running in so it can act accordingly (see the sketch after this
+  list).
+
+- *Attempting to run processes that are specific to or dependent on the
+ target architecture:* You can work around these attempts by using
+ native tools, which run on the host system, to accomplish the same
+ tasks, or by alternatively running the processes under QEMU, which
+ has the ``qemu_run_binary`` function. For more information, see the
+ :ref:`ref-classes-qemu` class.
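+
+Here is a minimal, purely illustrative sketch of the ``$D`` pattern (the
+recipe and directory names are hypothetical)::
+
+   pkg_postinst:${PN}() {
+       if [ -n "$D" ]; then
+           # Running on the host while the root filesystem is created:
+           # prefix absolute paths with $D so they land in the image.
+           mkdir -p $D${localstatedir}/lib/my-app
+       else
+           # Running on the target at first boot. With "read-only-rootfs"
+           # this branch must never be needed, so keep all work above.
+           mkdir -p ${localstatedir}/lib/my-app
+       fi
+   }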
+
+Areas With Write Access
+=======================
+
+With the "read-only-rootfs" feature enabled, any attempt by the target
+to write to the root filesystem at runtime fails. Consequently, you must
+make sure that you configure processes and applications that attempt
+these types of writes do so to directories with write access (e.g.
+``/tmp`` or ``/var/run``).
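+
+One common way to provide such a writable location, shown here only as an
+illustrative sketch (the mount point and size are arbitrary), is a
+``tmpfs`` entry in ``/etc/fstab``::
+
+   # RAM-backed scratch area for an application on a read-only rootfs
+   tmpfs  /var/lib/my-app  tmpfs  defaults,size=16M  0  0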
+
diff --git a/documentation/dev-manual/runtime-testing.rst b/documentation/dev-manual/runtime-testing.rst
new file mode 100644
index 0000000000..7a2b42f25a
--- /dev/null
+++ b/documentation/dev-manual/runtime-testing.rst
@@ -0,0 +1,594 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Performing Automated Runtime Testing
+************************************
+
+The OpenEmbedded build system makes available a series of automated
+tests for images to verify runtime functionality. You can run these
+tests on either QEMU or actual target hardware. Tests are written in
+Python, making use of the ``unittest`` module, and the majority of them
+run commands on the target system over SSH. This section describes how
+you set up the environment to use these tests, run available tests, and
+write and add your own tests.
+
+For information on the test and QA infrastructure available within the
+Yocto Project, see the ":ref:`ref-manual/release-process:testing and quality assurance`"
+section in the Yocto Project Reference Manual.
+
+Enabling Tests
+==============
+
+Depending on whether you are planning to run tests using QEMU or on the
+hardware, you have to take different steps to enable the tests. See the
+following subsections for information on how to enable both types of
+tests.
+
+Enabling Runtime Tests on QEMU
+------------------------------
+
+In order to run tests, you need to do the following:
+
+- *Set up to avoid interaction with sudo for networking:* To
+ accomplish this, you must do one of the following:
+
+   - Add ``NOPASSWD`` for your user in ``/etc/sudoers`` either for all
+     commands or just for ``runqemu-ifup``. You must provide the full
+     path, as that can change if you are using multiple clones of the
+     source repository (a sample entry is sketched after this list).
+
+ .. note::
+
+ On some distributions, you also need to comment out "Defaults
+ requiretty" in ``/etc/sudoers``.
+
+ - Manually configure a tap interface for your system.
+
+ - Run as root the script in ``scripts/runqemu-gen-tapdevs``, which
+ should generate a list of tap devices. This is the option
+ typically chosen for Autobuilder-type environments.
+
+ .. note::
+
+ - Be sure to use an absolute path when calling this script
+ with sudo.
+
+ - Ensure that your host has the package ``iptables`` installed.
+
+ - The package recipe ``qemu-helper-native`` is required to run
+ this script. Build the package using the following command::
+
+ $ bitbake qemu-helper-native
+
+- *Set the DISPLAY variable:* You need to set this variable so that
+ you have an X server available (e.g. start ``vncserver`` for a
+ headless machine).
+
+- *Be sure your host's firewall accepts incoming connections from
+ 192.168.7.0/24:* Some of the tests (in particular DNF tests) start an
+  HTTP server on a random high-numbered port, which is used to serve
+ files to the target. The DNF module serves
+ ``${WORKDIR}/oe-rootfs-repo`` so it can run DNF channel commands.
+ That means your host's firewall must accept incoming connections from
+ 192.168.7.0/24, which is the default IP range used for tap devices by
+ ``runqemu``.
+
+- *Be sure your host has the correct packages installed:* Depending on
+  your host's distribution, you need to have the following packages
+ installed:
+
+ - Ubuntu and Debian: ``sysstat`` and ``iproute2``
+
+ - openSUSE: ``sysstat`` and ``iproute2``
+
+ - Fedora: ``sysstat`` and ``iproute``
+
+ - CentOS: ``sysstat`` and ``iproute``
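+
+As a point of reference, a ``NOPASSWD`` entry in ``/etc/sudoers`` might
+look like the following sketch (the user name and checkout path are
+placeholders; use the absolute path of your own clone)::
+
+   myuser ALL = NOPASSWD: /home/myuser/poky/scripts/runqemu-ifup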
+
+Once you start running the tests, the following happens:
+
+#. A copy of the root filesystem is written to ``${WORKDIR}/testimage``.
+
+#. The image is booted under QEMU using the standard ``runqemu`` script.
+
+#. A default timeout of 500 seconds occurs to allow for the boot process
+ to reach the login prompt. You can change the timeout period by
+ setting
+ :term:`TEST_QEMUBOOT_TIMEOUT`
+ in the ``local.conf`` file.
+
+#. Once the boot process reaches the login prompt, the
+ tests run. The full boot log is written to
+ ``${WORKDIR}/testimage/qemu_boot_log``.
+
+#. Each test module loads in the order found in :term:`TEST_SUITES`. You can
+ find the full output of the commands run over SSH in
+   ``${WORKDIR}/testimage/ssh_target_log``.
+
+#. If no failures occur, the task running the tests ends successfully.
+ You can find the output from the ``unittest`` in the task log at
+ ``${WORKDIR}/temp/log.do_testimage``.
+
+Enabling Runtime Tests on Hardware
+----------------------------------
+
+The OpenEmbedded build system can run tests on real hardware, and for
+certain devices it can also deploy the image to be tested onto the
+device beforehand.
+
+For automated deployment, a "controller image" is installed onto the
+hardware once as part of setup. Then, each time tests are to be run, the
+following occurs:
+
+#. The controller image is booted into and used to write the image to be
+ tested to a second partition.
+
+#. The device is then rebooted using an external script that you need to
+ provide.
+
+#. The device boots into the image to be tested.
+
+When running tests (independent of whether the image has been deployed
+automatically or not), the device is expected to be connected to a
+network on a pre-determined IP address. You can either use static IP
+addresses written into the image, or set the image to use DHCP and have
+your DHCP server on the test network assign a known IP address based on
+the MAC address of the device.
+
+In order to run tests on hardware, you need to set :term:`TEST_TARGET` to an
+appropriate value. For QEMU, you do not have to change anything, the
+default value is "qemu". For running tests on hardware, the following
+options are available:
+
+- *"simpleremote":* Choose "simpleremote" if you are going to run tests
+ on a target system that is already running the image to be tested and
+ is available on the network. You can use "simpleremote" in
+ conjunction with either real hardware or an image running within a
+ separately started QEMU or any other virtual machine manager.
+
+- *"SystemdbootTarget":* Choose "SystemdbootTarget" if your hardware is
+ an EFI-based machine with ``systemd-boot`` as bootloader and
+ ``core-image-testmaster`` (or something similar) is installed. Also,
+ your hardware under test must be in a DHCP-enabled network that gives
+ it the same IP address for each reboot.
+
+ If you choose "SystemdbootTarget", there are additional requirements
+ and considerations. See the
+ ":ref:`dev-manual/runtime-testing:selecting systemdboottarget`" section, which
+ follows, for more information.
+
+- *"BeagleBoneTarget":* Choose "BeagleBoneTarget" if you are deploying
+ images and running tests on the BeagleBone "Black" or original
+ "White" hardware. For information on how to use these tests, see the
+ comments at the top of the BeagleBoneTarget
+ ``meta-yocto-bsp/lib/oeqa/controllers/beaglebonetarget.py`` file.
+
+- *"GrubTarget":* Choose "GrubTarget" if you are deploying images and running
+ tests on any generic PC that boots using GRUB. For information on how
+ to use these tests, see the comments at the top of the GrubTarget
+ ``meta-yocto-bsp/lib/oeqa/controllers/grubtarget.py`` file.
+
+- *"your-target":* Create your own custom target if you want to run
+ tests when you are deploying images and running tests on a custom
+ machine within your BSP layer. To do this, you need to add a Python
+ unit that defines the target class under ``lib/oeqa/controllers/``
+ within your layer. You must also provide an empty ``__init__.py``.
+ For examples, see files in ``meta-yocto-bsp/lib/oeqa/controllers/``.
+
+Selecting SystemdbootTarget
+---------------------------
+
+If you did not set :term:`TEST_TARGET` to "SystemdbootTarget", then you do
+not need any information in this section. You can skip down to the
+":ref:`dev-manual/runtime-testing:running tests`" section.
+
+If you did set :term:`TEST_TARGET` to "SystemdbootTarget", you also need to
+perform a one-time setup of your controller image by doing the following:
+
+#. *Set EFI_PROVIDER:* Be sure that :term:`EFI_PROVIDER` is as follows::
+
+ EFI_PROVIDER = "systemd-boot"
+
+#. *Build the controller image:* Build the ``core-image-testmaster`` image.
+ The ``core-image-testmaster`` recipe is provided as an example for a
+ "controller" image and you can customize the image recipe as you would
+ any other recipe.
+
+ Image recipe requirements are:
+
+ - Inherits ``core-image`` so that kernel modules are installed.
+
+   - Installs normal Linux utilities, not BusyBox ones (e.g. ``bash``,
+ ``coreutils``, ``tar``, ``gzip``, and ``kmod``).
+
+ - Uses a custom :term:`Initramfs` image with a custom
+ installer. A normal image that you can install usually creates a
+ single root filesystem partition. This image uses another installer that
+ creates a specific partition layout. Not all Board Support
+ Packages (BSPs) can use an installer. For such cases, you need to
+ manually create the following partition layout on the target:
+
+ - First partition mounted under ``/boot``, labeled "boot".
+
+ - The main root filesystem partition where this image gets installed,
+ which is mounted under ``/``.
+
+ - Another partition labeled "testrootfs" where test images get
+ deployed.
+
+#. *Install image:* Install the image that you just built on the target
+ system.
+
+The final thing you need to do when setting :term:`TEST_TARGET` to
+"SystemdbootTarget" is to set up the test image:
+
+#. *Set up your local.conf file:* Make sure you have the following
+ statements in your ``local.conf`` file::
+
+ IMAGE_FSTYPES += "tar.gz"
+ IMAGE_CLASSES += "testimage"
+ TEST_TARGET = "SystemdbootTarget"
+ TEST_TARGET_IP = "192.168.2.3"
+
+#. *Build your test image:* Use BitBake to build the image::
+
+ $ bitbake core-image-sato
+
+Power Control
+-------------
+
+For most hardware targets other than "simpleremote", you can control
+power:
+
+- You can use :term:`TEST_POWERCONTROL_CMD` together with
+ :term:`TEST_POWERCONTROL_EXTRA_ARGS` as a command that runs on the host
+ and does power cycling. The test code passes one argument to that
+ command: off, on or cycle (off then on). Here is an example that
+ could appear in your ``local.conf`` file::
+
+ TEST_POWERCONTROL_CMD = "powercontrol.exp test 10.11.12.1 nuc1"
+
+ In this example, the expect
+ script does the following:
+
+ .. code-block:: shell
+
+ ssh test@10.11.12.1 "pyctl nuc1 arg"
+
+ It then runs a Python script that controls power for a label called
+ ``nuc1``.
+
+ .. note::
+
+ You need to customize :term:`TEST_POWERCONTROL_CMD` and
+ :term:`TEST_POWERCONTROL_EXTRA_ARGS` for your own setup. The one requirement
+ is that it accepts "on", "off", and "cycle" as the last argument.
+
+- When no command is defined, it connects to the device over SSH and
+ uses the classic reboot command to reboot the device. Classic reboot
+ is fine as long as the machine actually reboots (i.e. the SSH test
+ has not failed). It is useful for scenarios where you have a simple
+ setup, typically with a single board, and where some manual
+ interaction is okay from time to time.
+
+If you have no hardware to automatically perform power control but still
+wish to experiment with automated hardware testing, you can use the
+``dialog-power-control`` script that shows a dialog prompting you to perform
+the required power action. This script requires either KDialog or Zenity
+to be installed. To use this script, set the
+:term:`TEST_POWERCONTROL_CMD`
+variable as follows::
+
+ TEST_POWERCONTROL_CMD = "${COREBASE}/scripts/contrib/dialog-power-control"
+
+Serial Console Connection
+-------------------------
+
+For test target classes requiring a serial console to interact with the
+bootloader (e.g. BeagleBoneTarget and GrubTarget),
+you need to specify a command to use to connect to the serial console of
+the target machine by using the
+:term:`TEST_SERIALCONTROL_CMD`
+variable and optionally the
+:term:`TEST_SERIALCONTROL_EXTRA_ARGS`
+variable.
+
+This command could be a serial terminal program if the machine is
+connected to a local serial port, or a ``telnet`` or ``ssh`` command
+connecting to a remote console server. Regardless of the case, the
+command simply needs to connect to the serial console and forward that
+connection to standard input and output as any normal terminal program
+does. For example, to use the picocom terminal program on serial device
+``/dev/ttyUSB0`` at 115200 bps, you would set the variable as follows::
+
+ TEST_SERIALCONTROL_CMD = "picocom /dev/ttyUSB0 -b 115200"
+
+For local
+devices where the serial port device disappears when the device reboots,
+an additional "serdevtry" wrapper script is provided. To use this
+wrapper, simply prefix the terminal command with
+``${COREBASE}/scripts/contrib/serdevtry``::
+
+ TEST_SERIALCONTROL_CMD = "${COREBASE}/scripts/contrib/serdevtry picocom -b 115200 /dev/ttyUSB0"
+
+Running Tests
+=============
+
+You can start the tests automatically or manually:
+
+- *Automatically running tests:* To run the tests automatically after the
+ OpenEmbedded build system successfully creates an image, first set the
+ :term:`TESTIMAGE_AUTO` variable to "1" in your ``local.conf`` file in the
+ :term:`Build Directory`::
+
+ TESTIMAGE_AUTO = "1"
+
+ Next, build your image. If the image successfully builds, the
+ tests run::
+
+ bitbake core-image-sato
+
+- *Manually running tests:* To manually run the tests, first globally
+ inherit the :ref:`ref-classes-testimage` class by editing your
+ ``local.conf`` file::
+
+ IMAGE_CLASSES += "testimage"
+
+ Next, use BitBake to run the tests::
+
+ bitbake -c testimage image
+
+All test files reside in ``meta/lib/oeqa/runtime/cases`` in the
+:term:`Source Directory`. A test name maps
+directly to a Python module. Each test module may contain a number of
+individual tests. Tests are usually grouped together by the area tested
+(e.g. tests for systemd reside in ``meta/lib/oeqa/runtime/cases/systemd.py``).
+
+You can add tests to any layer provided you place them in the proper
+area and you extend :term:`BBPATH` in
+the ``local.conf`` file as normal. Be sure that tests reside in
+``layer/lib/oeqa/runtime/cases``.
+
+.. note::
+
+ Be sure that module names do not collide with module names used in
+ the default set of test modules in ``meta/lib/oeqa/runtime/cases``.
+
+You can change the set of tests run by appending to or overriding the
+:term:`TEST_SUITES` variable in
+``local.conf``. Each name in :term:`TEST_SUITES` represents a required test
+for the image. Test modules named within :term:`TEST_SUITES` cannot be
+skipped even if a test is not suitable for an image (e.g. running the
+RPM tests on an image without ``rpm``). Appending "auto" to
+:term:`TEST_SUITES` causes the build system to try to run all tests that are
+suitable for the image (i.e. each test module may elect to skip itself).
+
+The order you list tests in :term:`TEST_SUITES` is important and influences
+test dependencies. Consequently, tests that depend on other tests should
+be added after the test on which they depend. For example, since the
+``ssh`` test depends on the ``ping`` test, "ssh" needs to come after
+"ping" in the list. The test class provides no re-ordering or dependency
+handling.
+
+.. note::
+
+ Each module can have multiple classes with multiple test methods.
+ And, Python ``unittest`` rules apply.
+
+Here are some things to keep in mind when running tests:
+
+- The default tests for the image are defined as::
+
+ DEFAULT_TEST_SUITES:pn-image = "ping ssh df connman syslog xorg scp vnc date rpm dnf dmesg"
+
+- Add your own test to the list by using the following::
+
+ TEST_SUITES:append = " mytest"
+
+- Run a specific list of tests as follows::
+
+ TEST_SUITES = "test1 test2 test3"
+
+ Remember, order is important. Be sure to place a test that is
+ dependent on another test later in the order.
+
+Exporting Tests
+===============
+
+You can export tests so that they can run independently of the build
+system. Exporting tests is required if you want to be able to hand the
+test execution off to a scheduler. You can only export tests that are
+defined in :term:`TEST_SUITES`.
+
+If your image is already built, make sure the following are set in your
+``local.conf`` file::
+
+ INHERIT += "testexport"
+ TEST_TARGET_IP = "IP-address-for-the-test-target"
+ TEST_SERVER_IP = "IP-address-for-the-test-server"
+
+You can then export the tests with the
+following BitBake command form::
+
+ $ bitbake image -c testexport
+
+Exporting the tests places them in the :term:`Build Directory` in
+``tmp/testexport/``\ image, which is controlled by the :term:`TEST_EXPORT_DIR`
+variable.
+
+You can now run the tests outside of the build environment::
+
+ $ cd tmp/testexport/image
+ $ ./runexported.py testdata.json
+
+Here is a complete example that shows IP addresses and uses the
+``core-image-sato`` image::
+
+ INHERIT += "testexport"
+ TEST_TARGET_IP = "192.168.7.2"
+ TEST_SERVER_IP = "192.168.7.1"
+
+Use BitBake to export the tests::
+
+ $ bitbake core-image-sato -c testexport
+
+Run the tests outside of
+the build environment using the following::
+
+ $ cd tmp/testexport/core-image-sato
+ $ ./runexported.py testdata.json
+
+Writing New Tests
+=================
+
+As mentioned previously, all new test files need to be in the proper
+place for the build system to find them. New tests for additional
+functionality outside of the core should be added to the layer that adds
+the functionality, in ``layer/lib/oeqa/runtime/cases`` (as long as
+:term:`BBPATH` is extended in the
+layer's ``layer.conf`` file as normal). Just remember the following:
+
+- Filenames need to map directly to test (module) names.
+
+- Do not use module names that collide with existing core tests.
+
+- Minimally, an empty ``__init__.py`` file must be present in the runtime
+ directory.
+
+To create a new test, start by copying an existing module (e.g.
+``oe_syslog.py`` or ``gcc.py`` are good ones to use). Test modules can use
+code from ``meta/lib/oeqa/utils``, which contains helper classes.
+
+.. note::
+
+ Structure shell commands so that they return a single status code
+ that you can rely on to indicate success. Be aware that sometimes you
+ will need to parse the output. See the ``df.py`` and ``date.py``
+ modules for examples.
+
+You will notice that all test classes inherit ``oeRuntimeTest``, which
+is found in ``meta/lib/oetest.py``. This base class offers some helper
+attributes, which are described in the following sections:
+
+Class Methods
+-------------
+
+Class methods are as follows:
+
+- *hasPackage(pkg):* Returns "True" if ``pkg`` is in the installed
+ package list of the image, which is based on the manifest file that
+ is generated during the :ref:`ref-tasks-rootfs` task.
+
+- *hasFeature(feature):* Returns "True" if the feature is in
+ :term:`IMAGE_FEATURES` or
+ :term:`DISTRO_FEATURES`.
+
+Class Attributes
+----------------
+
+Class attributes are as follows:
+
+- *pscmd:* Equals "ps -ef" if ``procps`` is installed in the image.
+ Otherwise, ``pscmd`` equals "ps" (busybox).
+
+- *tc:* The called test context, which gives access to the
+ following attributes:
+
+ - *d:* The BitBake datastore, which allows you to use expressions
+ such as ``oeRuntimeTest.tc.d.getVar("VIRTUAL-RUNTIME_init_manager")``.
+
+ - *testslist and testsrequired:* Used internally. The tests
+ do not need these.
+
+ - *filesdir:* The absolute path to
+ ``meta/lib/oeqa/runtime/files``, which contains helper files for
+ tests meant for copying on the target such as small files written
+ in C for compilation.
+
+ - *target:* The target controller object used to deploy and
+ start an image on a particular target (e.g. Qemu, SimpleRemote,
+ and SystemdbootTarget). Tests usually use the following:
+
+ - *ip:* The target's IP address.
+
+ - *server_ip:* The host's IP address, which is usually used
+ by the DNF test suite.
+
+ - *run(cmd, timeout=None):* The single most used method.
+ This command is a wrapper for ``ssh root@host "cmd"``. The
+ command returns a tuple (status, output): the return code of
+ "cmd" and whatever output it produces. The optional timeout
+ argument represents the number of seconds the test should wait
+ for "cmd" to return. If the argument is "None", the test uses
+ the instance's default timeout period, which is 300 seconds.
+ If the argument is "0", the test runs until the command returns.
+
+ - *copy_to(localpath, remotepath):*
+ ``scp localpath root@ip:remotepath``.
+
+ - *copy_from(remotepath, localpath):*
+ ``scp root@host:remotepath localpath``.
+
+Instance Attributes
+-------------------
+
+There is a single instance attribute, which is ``target``. The ``target``
+instance attribute is identical to the class attribute of the same name,
+which is described in the previous section. This attribute exists as
+both an instance and class attribute so tests can use
+``self.target.run(cmd)`` in instance methods instead of
+``oeRuntimeTest.tc.target.run(cmd)``.
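+
+ Putting these pieces together, here is a minimal sketch of a hypothetical
+ test module. The module, class, and package names are examples only, and
+ the exact import path of the base class can vary between releases, so
+ treat this as a starting point rather than a definitive implementation::
+
+    # lib/oeqa/runtime/cases/mytest.py (hypothetical module in your layer)
+    from oeqa.oetest import oeRuntimeTest
+
+    class MyTest(oeRuntimeTest):
+
+        def test_df_root(self):
+            # Skip the test if the required package is not in the image manifest
+            if not oeRuntimeTest.hasPackage("coreutils"):
+                self.skipTest("coreutils is not installed in the image")
+            # Run a command on the target and check its status and output
+            (status, output) = self.target.run("df -P /", timeout=60)
+            self.assertEqual(status, 0, msg="df failed: %s" % output)
+            self.assertIn("/", output, msg="unexpected df output: %s" % output)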
+
+Installing Packages in the DUT Without the Package Manager
+==========================================================
+
+When a test requires a package built by BitBake, it is possible to
+install that package. Installing the package does not require a package
+manager be installed in the device under test (DUT). It does, however,
+require an SSH connection and the target must be using the
+``sshcontrol`` class.
+
+.. note::
+
+ This method uses ``scp`` to copy files from the host to the target, which
+ causes permissions and special attributes to be lost.
+
+A JSON file is used to define the packages needed by a test. This file
+must be in the same path as the file used to define the tests.
+Furthermore, the filename must map directly to the test module name with
+a ``.json`` extension.
+
+The JSON file must contain an object whose keys are test names. The value
+for each test name is either a single object or an array of objects, each
+describing one package to install. These objects use the following fields:
+
+- "pkg" --- a mandatory string that is the name of the package to be
+ installed.
+
+- "rm" --- an optional boolean, which defaults to "false", that specifies
+ to remove the package after the test.
+
+- "extract" --- an optional boolean, which defaults to "false", that
+ specifies if the package must be extracted from the package format.
+ When set to "true", the package is not automatically installed into
+ the DUT.
+
+Here is an example JSON file that handles test "foo" installing
+package "bar" and test "foobar" installing packages "foo" and "bar".
+Once the "foobar" test is complete, its packages are removed from the DUT::
+
+ {
+ "foo": {
+ "pkg": "bar"
+ },
+ "foobar": [
+ {
+ "pkg": "foo",
+ "rm": true
+ },
+ {
+ "pkg": "bar",
+ "rm": true
+ }
+ ]
+ }
+
diff --git a/documentation/dev-manual/sbom.rst b/documentation/dev-manual/sbom.rst
new file mode 100644
index 0000000000..b72bad1554
--- /dev/null
+++ b/documentation/dev-manual/sbom.rst
@@ -0,0 +1,83 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Creating a Software Bill of Materials
+*************************************
+
+Once you are able to build an image for your project, once the licenses for
+each software component are all identified (see
+":ref:`dev-manual/licenses:working with licenses`") and once vulnerability
+fixes are applied (see ":ref:`dev-manual/vulnerabilities:checking
+for vulnerabilities`"), the OpenEmbedded build system can generate
+a description of all the components you used, their licenses, their dependencies,
+their sources, the changes that were applied to them and the known
+vulnerabilities that were fixed.
+
+This description is generated in the form of a *Software Bill of Materials*
+(:term:`SBOM`), using the :term:`SPDX` standard.
+
+When you release software, this is the most standard way to provide information
+about the Software Supply Chain of your software image and SDK. The
+:term:`SBOM` tooling is often used to ensure open source license compliance,
+by providing the license texts used in the product, which legal departments
+and end users can read in a standardized format.
+
+:term:`SBOM` information is also critical to performing vulnerability exposure
+assessments, as all the components used in the Software Supply Chain are listed.
+
+The OpenEmbedded build system doesn't generate such information by default.
+To make this happen, you must inherit the
+:ref:`ref-classes-create-spdx` class from a configuration file::
+
+ INHERIT += "create-spdx"
+
+Upon building an image, you will then get:
+
+- :term:`SPDX` output in JSON format as an ``IMAGE-MACHINE.spdx.json`` file in
+ ``tmp/deploy/images/MACHINE/`` inside the :term:`Build Directory`.
+
+- This toplevel file is accompanied by an ``IMAGE-MACHINE.spdx.index.json``
+ containing an index of JSON :term:`SPDX` files for individual recipes.
+
+- The compressed archive ``IMAGE-MACHINE.spdx.tar.zst`` contains the index
+ and the files for the single recipes.
+
+The :ref:`ref-classes-create-spdx` class offers options to include
+more information in the output :term:`SPDX` data:
+
+- Make the JSON files more human readable (:term:`SPDX_PRETTY`).
+
+- Add compressed archives of the files in the generated target packages
+ (:term:`SPDX_ARCHIVE_PACKAGED`).
+
+- Add a description of the source files used to generate host tools and target
+ packages (:term:`SPDX_INCLUDE_SOURCES`).
+
+- Add archives of these source files themselves (:term:`SPDX_ARCHIVE_SOURCES`).
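+
+For example, a configuration enabling all of these options could look like
+the following in your ``local.conf`` or distribution configuration file
+(each option is enabled by setting the corresponding variable to "1")::
+
+   INHERIT += "create-spdx"
+   SPDX_PRETTY = "1"
+   SPDX_INCLUDE_SOURCES = "1"
+   SPDX_ARCHIVE_PACKAGED = "1"
+   SPDX_ARCHIVE_SOURCES = "1"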
+
+Though the toplevel :term:`SPDX` output is available in
+``tmp/deploy/images/MACHINE/`` inside the :term:`Build Directory`, ancillary
+generated files are available in ``tmp/deploy/spdx/MACHINE`` too, such as:
+
+- The individual :term:`SPDX` JSON files in the ``IMAGE-MACHINE.spdx.tar.zst``
+ archive.
+
+- Compressed archives of the files in the generated target packages,
+ in ``packages/packagename.tar.zst`` (when :term:`SPDX_ARCHIVE_PACKAGED`
+ is set).
+
+- Compressed archives of the source files used to build the host tools
+ and the target packages in ``recipes/recipe-packagename.tar.zst``
+ (when :term:`SPDX_ARCHIVE_SOURCES` is set). Those are needed to fulfill
+ "source code access" license requirements.
+
+See also the :term:`SPDX_CUSTOM_ANNOTATION_VARS` variable, which allows
+you to associate custom annotations with a recipe.
+See the `tools page <https://spdx.dev/resources/tools/>`__ on the :term:`SPDX`
+project website for a list of tools to consume and transform the :term:`SPDX`
+data generated by the OpenEmbedded build system.
+
+See also Joshua Watt's presentations
+`Automated SBoM generation with OpenEmbedded and the Yocto Project <https://youtu.be/Q5UQUM6zxVU>`__
+at FOSDEM 2023 and
+`SPDX in the Yocto Project <https://fosdem.org/2024/schedule/event/fosdem-2024-3318-spdx-in-the-yocto-project/>`__
+at FOSDEM 2024.
diff --git a/documentation/dev-manual/securing-images.rst b/documentation/dev-manual/securing-images.rst
new file mode 100644
index 0000000000..e5791d3d6d
--- /dev/null
+++ b/documentation/dev-manual/securing-images.rst
@@ -0,0 +1,156 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Making Images More Secure
+*************************
+
+Security is of increasing concern for embedded devices. Consider the
+issues and problems discussed in just this sampling of work found across
+the Internet:
+
+- *"*\ `Security Risks of Embedded
+ Systems <https://www.schneier.com/blog/archives/2014/01/security_risks_9.html>`__\ *"*
+ by Bruce Schneier
+
+- *"*\ `Internet Census
+ 2012 <http://census2012.sourceforge.net/paper.html>`__\ *"* by Carna
+ Botnet
+
+- *"*\ `Security Issues for Embedded
+ Devices <https://elinux.org/images/6/6f/Security-issues.pdf>`__\ *"*
+ by Jake Edge
+
+When securing your image is of concern, there are steps, tools, and
+variables that you can consider to help you reach the security goals you
+need for your particular device. Not all situations are identical when
+it comes to making an image secure. Consequently, this section provides
+some guidance and suggestions for consideration when you want to make
+your image more secure.
+
+.. note::
+
+ Because the security requirements and risks are different for every
+ type of device, this section cannot provide a complete reference on
+ securing your custom OS. It is strongly recommended that you also
+ consult other sources of information on embedded Linux system
+ hardening and on security.
+
+General Considerations
+======================
+
+There are general considerations that help you create more secure images.
+You should consider the following suggestions to make your device
+more secure:
+
+- Scan additional code you are adding to the system (e.g. application
+ code) by using static analysis tools. Look for buffer overflows and
+ other potential security problems.
+
+- Pay particular attention to the security for any web-based
+ administration interface.
+
+ Web interfaces typically need to perform administrative functions and
+ tend to need to run with elevated privileges. Thus, the consequences
+ resulting from the interface's security becoming compromised can be
+ serious. Look for common web vulnerabilities such as
+ cross-site-scripting (XSS), unvalidated inputs, and so forth.
+
+ As with system passwords, the default credentials for accessing a
+ web-based interface should not be the same across all devices. This
+ is particularly true if the interface is enabled by default as it can
+ be assumed that many end-users will not change the credentials.
+
+- Ensure you can update the software on the device to mitigate
+ vulnerabilities discovered in the future. This consideration
+ especially applies when your device is network-enabled.
+
+- Regularly scan and apply fixes for CVE security issues affecting
+ all software components in the product, see ":ref:`dev-manual/vulnerabilities:checking for vulnerabilities`".
+
+- Regularly update your version of Poky and OE-Core from their upstream
+ developers, e.g. to apply updates and security fixes from stable
+ and :term:`LTS` branches.
+
+- Ensure you remove or disable debugging functionality before producing
+ the final image. For information on how to do this, see the
+ ":ref:`dev-manual/securing-images:considerations specific to the openembedded build system`"
+ section.
+
+- Ensure you have no network services listening that are not needed.
+
+- Remove any software from the image that is not needed.
+
+- Enable hardware support for secure boot functionality when your
+ device supports this functionality.
+
+Security Flags
+==============
+
+The Yocto Project has security flags that you can enable that help make
+your build output more secure. The security flags are in the
+``meta/conf/distro/include/security_flags.inc`` file in your
+:term:`Source Directory` (e.g. ``poky``).
+
+.. note::
+
+ Depending on the recipe, certain security flags are enabled and
+ disabled by default.
+
+Use the following line in your ``local.conf`` file or in your custom
+distribution configuration file to enable the security compiler and
+linker flags for your build::
+
+ require conf/distro/include/security_flags.inc
+
+Considerations Specific to the OpenEmbedded Build System
+========================================================
+
+You can take some steps that are specific to the OpenEmbedded build
+system to make your images more secure:
+
+- Ensure "debug-tweaks" is not one of your selected
+ :term:`IMAGE_FEATURES`.
+ When creating a new project, the default is to provide you with an
+ initial ``local.conf`` file that enables this feature using the
+ :term:`EXTRA_IMAGE_FEATURES`
+ variable with the line::
+
+ EXTRA_IMAGE_FEATURES = "debug-tweaks"
+
+ To disable that feature, simply comment out that line in your
+ ``local.conf`` file, or make sure :term:`IMAGE_FEATURES` does not contain
+ "debug-tweaks" before producing your final image. Among other things,
+ leaving this in place sets the root password as blank, which makes
+ logging in for debugging or inspection easy during development but
+ also means anyone can easily log in during production.
+
+- It is possible to set a root password for the image and also to set
+ passwords for any extra users you might add (e.g. administrative or
+ service type users). When you set up passwords for multiple images or
+ users, you should not duplicate passwords.
+
+ To set up passwords, use the :ref:`ref-classes-extrausers` class, which
+ is the preferred method. For an example of how to set up both root and
+ user passwords, see the ":ref:`ref-classes-extrausers`" section and the
+ sketch after this list.
+
+ .. note::
+
+ When adding extra user accounts or setting a root password, be
+ cautious about setting the same password on every device. If you
+ do this, and the password you have set is exposed, then every
+ device is now potentially compromised. If you need this access but
+ want to ensure security, consider setting a different, random
+ password for each device. Typically, you do this as a separate
+ step after you deploy the image onto the device.
+
+- Consider enabling a Mandatory Access Control (MAC) framework such as
+ SMACK or SELinux and tuning it appropriately for your device's usage.
+ You can find more information in the
+ :yocto_git:`meta-selinux </meta-selinux/>` layer.
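+
+Here is a minimal sketch of the password setup described above using the
+:ref:`ref-classes-extrausers` class. The user name and password hashes are
+placeholders; generate real hashes yourself (for example with ``openssl
+passwd``) and see the ":ref:`ref-classes-extrausers`" section for the
+authoritative syntax::
+
+   INHERIT += "extrausers"
+   EXTRA_USERS_PARAMS = "\
+       usermod -p '<root-password-hash>' root; \
+       useradd -p '<admin-password-hash>' myadmin; \
+       "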
+
+Tools for Hardening Your Image
+==============================
+
+The Yocto Project provides tools for making your image more secure. You
+can find these tools in the ``meta-security`` layer of the
+:yocto_git:`Yocto Project Source Repositories <>`.
+
diff --git a/documentation/dev-manual/security-subjects.rst b/documentation/dev-manual/security-subjects.rst
new file mode 100644
index 0000000000..1b02b6a9e9
--- /dev/null
+++ b/documentation/dev-manual/security-subjects.rst
@@ -0,0 +1,189 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Dealing with Vulnerability Reports
+**********************************
+
+The Yocto Project and OpenEmbedded are open-source, community-based projects
+used in numerous products. They assemble multiple other open-source projects,
+and need to handle security issues and practices both internal (in the code
+maintained by both projects), and external (maintained by other projects and
+organizations).
+
+This manual assembles security-related information concerning the whole
+ecosystem. It includes information on reporting a potential security issue,
+the operation of the YP Security team and how to contribute to the
+related code. It is written to be useful for both security researchers and
+YP developers.
+
+How to report a potential security vulnerability?
+=================================================
+
+If you would like to report a public issue (for example, one with a released
+CVE number), please report it using the
+:yocto_bugs:`Security Bugzilla </enter_bug.cgi?product=Security>`.
+
+If you are dealing with a not-yet-released issue, or an urgent one, please send
+a message to security AT yoctoproject DOT org, including as many details as
+possible: the layer or software module affected, the recipe and its version,
+and any example code, if available. This mailing list is monitored by the
+Yocto Project Security team.
+
+For each layer, you might also look for specific instructions (if any) for
+reporting potential security issues in the specific ``SECURITY.md`` file at the
+root of the repository. Instructions on how and where to submit a patch are
+usually available in ``README.md``. If this is your first patch to the
+Yocto Project/OpenEmbedded, you might want to have a look at the
+Contributor's Manual section
+":ref:`contributor-guide/submit-changes:preparing changes for submission`".
+
+Branches maintained with security fixes
+---------------------------------------
+
+See the
+:ref:`Release process <ref-manual/release-process:Stable Release Process>`
+documentation for details regarding the policies and maintenance of stable
+branches.
+
+The :yocto_wiki:`Releases page </Releases>` contains a list
+of all releases of the Yocto Project. Versions in gray are no longer actively
+maintained with security patches, but well-tested patches may still be accepted
+for them for significant issues.
+
+Security-related discussions at the Yocto Project
+-------------------------------------------------
+
+We have set up two security-related mailing lists:
+
+ - Public List: yocto [dash] security [at] yoctoproject [dot] org
+
+ This is a public mailing list for anyone to subscribe to. This list is an
+ open list to discuss public security issues/patches and security-related
+ initiatives. For more information, including subscription information,
+ please see the :yocto_lists:`yocto-security mailing list info page </g/yocto-security>`.
+
+ - Private List: security [at] yoctoproject [dot] org
+
+ This is a private mailing list for reporting non-published potential
+ vulnerabilities. The list is monitored by the Yocto Project Security team.
+
+
+What you should do if you find a security vulnerability
+-------------------------------------------------------
+
+If you find a security flaw: a crash, an information leakage, or anything that
+can have a security impact if exploited in any Open Source software built or
+used by the Yocto Project, please report this to the Yocto Project Security
+Team. If you prefer to contact the upstream project directly, please send a
+copy to the security team at the Yocto Project as well. If you believe this is
+highly sensitive information, please report the vulnerability in a secure way,
+i.e. encrypt the email and send it to the private list. This ensures that
+the exploit is not leaked and exploited before a response/fix has been generated.
+
+Security team
+=============
+
+The Yocto Project/OpenEmbedded security team coordinates the work on security
+subjects in the project. All general discussion takes place publicly. The
+Security Team only uses confidential communication tools to deal with private
+vulnerability reports before they are released.
+
+Security team appointment
+-------------------------
+
+The Yocto Project Security Team consists of at least three members. When new
+members are needed, the Yocto Project Technical Steering Committee (YP TSC)
+asks for nominations through public channels, including a nomination deadline.
+Self-nominations are possible. When the deadline is
+reached, the YP TSC posts the list of candidates for comments from project
+participants and developers. Comments may be sent publicly or privately to the
+YP and OE TSCs. The candidates are approved by both YP TSC and OpenEmbedded
+Technical Steering Committee (OE TSC) and the final list of the team members
+is announced publicly. The aim is to have people representing technical
+leadership, security knowledge and infrastructure present with enough people
+to provide backup/coverage but keep the notification list small enough to
+minimize information risk and maintain trust.
+
+YP Security Team members may resign at any time.
+
+Security Team Operations
+------------------------
+
+The work of the Security Team might require high confidentiality. Team members
+are individuals selected by merit and do not represent the companies they work
+for. They do not share information about confidential issues outside of the team
+and do not hint about ongoing embargoes.
+
+Team members can bring in domain experts as needed. Those people should be
+added to individual issues only and adhere to the same standards as the YP
+Security Team.
+
+The YP security team organizes its meetings and communication as needed.
+
+When the YP Security team receives a report about a potential security
+vulnerability, they quickly analyze and notify the reporter of the result.
+They might also request more information.
+
+If the issue is confirmed and affects the code maintained by the YP, they
+confidentially notify maintainers of that code and work with them to prepare
+a fix.
+
+If the issue is confirmed and affects an upstream project, the YP security team
+notifies the project. Usually, the upstream project analyzes the problem again.
+If they deem it a real security problem in their software, they develop and
+release a fix following their security policy. They may want to include the
+original reporter in the loop. There is also sometimes some coordination for
+handling patches, backporting patches etc, or just understanding the problem
+or what caused it.
+
+When the fix is publicly available, the YP security team member or the
+package maintainer sends patches against the YP code base, following usual
+procedures, including public code review.
+
+What Yocto Security Team does when it receives a security vulnerability
+-----------------------------------------------------------------------
+
+The YP Security Team performs a quick analysis and would usually report
+the flaw to the upstream project. Normally the upstream project analyzes the
+problem. If they deem it a real security problem in their software, they
+develop and release a fix following their own security policy. They may want
+to include the original reporter in the loop. There is also sometimes some
+coordination for handling patches, backporting patches etc, or just
+understanding the problem or what caused it.
+
+The security policy of the upstream project might include a notification to
+Linux distributions or other important downstream projects in advance to
+discuss coordinated disclosure. These mailing lists are normally non-public.
+
+When the upstream project releases a version with the fix, they are responsible
+for contacting `Mitre <https://www.cve.org/>`__ to get a CVE number assigned and
+the CVE record published.
+
+If an upstream project does not respond quickly
+-----------------------------------------------
+
+If an upstream project does not fix the problem in a reasonable time,
+the Yocto Project Security Team will contact other interested parties (usually
+other distributions) in the community and together try to solve the
+vulnerability as quickly as possible.
+
+The Yocto Project Security team adheres to a 90-day disclosure policy
+by default. The embargo time can be increased when necessary.
+
+Current Security Team members
+-----------------------------
+
+For secure communications, please send your messages encrypted using the GPG
+keys. Remember, message headers are not encrypted so do not include sensitive
+information in the subject line.
+
+ - Ross Burton: <ross@burtonini.com> `Public key <https://keys.openpgp.org/search?q=ross%40burtonini.com>`__
+
+ - Michael Halstead: <mhalstead [at] linuxfoundation [dot] org>
+ `Public key <https://pgp.mit.edu/pks/lookup?op=vindex&search=0x3373170601861969>`__
+ or `Public key <https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xd1f2407285e571ed12a407a73373170601861969>`__
+
+ - Richard Purdie: <richard.purdie@linuxfoundation.org> `Public key <https://keys.openpgp.org/search?q=richard.purdie%40linuxfoundation.org>`__
+
+ - Marta Rybczynska: <marta DOT rybczynska [at] syslinbit [dot] com> `Public key <https://keys.openpgp.org/search?q=marta.rybczynska@syslinbit.com>`__
+
+ - Steve Sakoman: <steve [at] sakoman [dot] com> `Public key <https://keys.openpgp.org/search?q=steve%40sakoman.com>`__
diff --git a/documentation/dev-manual/speeding-up-build.rst b/documentation/dev-manual/speeding-up-build.rst
new file mode 100644
index 0000000000..6e0d7873ac
--- /dev/null
+++ b/documentation/dev-manual/speeding-up-build.rst
@@ -0,0 +1,109 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Speeding Up a Build
+*******************
+
+Build time can be an issue. By default, the build system uses simple
+controls to try and maximize build efficiency. In general, the default
+settings for all the following variables result in the most efficient
+build times when dealing with single socket systems (i.e. a single CPU).
+If you have multiple CPUs, you might try increasing the default values
+to gain more speed. See the descriptions in the glossary for each
+variable for more information:
+
+- :term:`BB_NUMBER_THREADS`:
+ The maximum number of threads BitBake simultaneously executes.
+
+- :term:`BB_NUMBER_PARSE_THREADS`:
+ The number of threads BitBake uses during parsing.
+
+- :term:`PARALLEL_MAKE`: Extra
+ options passed to the ``make`` command during the
+ :ref:`ref-tasks-compile` task in
+ order to specify parallel compilation on the local build host.
+
+- :term:`PARALLEL_MAKEINST`:
+ Extra options passed to the ``make`` command during the
+ :ref:`ref-tasks-install` task in
+ order to specify parallel installation on the local build host.
+
+As mentioned, these variables all scale to the number of processor cores
+available on the build system. For single socket systems, this
+auto-scaling ensures that the build system fundamentally takes advantage
+of potential parallel operations during the build based on the build
+machine's capabilities.
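+
+For example, on a build host with eight cores you might set the following
+in your ``local.conf`` (the values shown are purely illustrative; the
+defaults already scale to the number of available cores)::
+
+   BB_NUMBER_THREADS = "8"
+   BB_NUMBER_PARSE_THREADS = "8"
+   PARALLEL_MAKE = "-j 8"
+   PARALLEL_MAKEINST = "-j 8"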
+
+Additional factors that can affect build speed are:
+
+- File system type: The file system type that the build is being
+ performed on can also influence performance. Using ``ext4`` is
+ recommended over ``ext2`` and ``ext3`` due to ``ext4``'s
+ improved features, such as extents.
+
+- Disabling the updating of access time using ``noatime``: The
+ ``noatime`` mount option prevents the build system from updating file
+ and directory access times.
+
+- Setting a longer commit interval: Using the "commit=" mount option
+ increases the interval in seconds between disk cache writes. Changing
+ this interval from the five-second default to something longer increases
+ the risk of data loss but decreases the need to write to the disk,
+ thus increasing build performance.
+
+- Choosing the packaging backend: Of the available packaging backends,
+ IPK is the fastest. Additionally, selecting a single packaging
+ backend also helps.
+
+- Using ``tmpfs`` for :term:`TMPDIR`
+ as a temporary file system: While this can help speed up the build,
+ the benefits are limited due to the compiler using ``-pipe``. The
+ build system goes to some lengths to avoid ``sync()`` calls into the
+ file system on the principle that if there was a significant failure,
+ the :term:`Build Directory` contents could easily be rebuilt.
+
+- Inheriting the :ref:`ref-classes-rm-work` class:
+ Inheriting this class has been shown to speed up builds due to
+ significantly lower amounts of data stored in the data cache as well
+ as on disk. Inheriting this class also makes cleanup of
+ :term:`TMPDIR` faster, at the
+ expense of being able to easily dive into the source code. File
+ system maintainers have recommended that the fastest way to clean up
+ large numbers of files is to reformat partitions rather than delete
+ files due to the linear nature of partitions. This, of course,
+ assumes you structure the disk partitions and file systems in a way
+ that this is practical.
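+
+ For example, a minimal sketch for enabling this class, with an
+ illustrative exclusion for a recipe whose work directory you want to
+ keep around for debugging, is::
+
+    INHERIT += "rm_work"
+    RM_WORK_EXCLUDE += "busybox"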
+
+Aside from the previous list, you should keep some trade-offs in mind
+that can help you speed up the build:
+
+- Remove items from
+ :term:`DISTRO_FEATURES`
+ that you might not need.
+
+- Exclude debug symbols and other debug information: If you do not need
+ these symbols and other debug information, disabling the ``*-dbg``
+ package generation can speed up the build. You can disable this
+ generation by setting the
+ :term:`INHIBIT_PACKAGE_DEBUG_SPLIT`
+ variable to "1".
+
+- Disable static library generation for recipes derived from
+ ``autoconf`` or ``libtool``: Here is an example showing how to
+ disable static libraries and still provide an override to handle
+ exceptions::
+
+ STATICLIBCONF = "--disable-static"
+ STATICLIBCONF:sqlite3-native = ""
+ EXTRA_OECONF += "${STATICLIBCONF}"
+
+ .. note::
+
+ - Some recipes need static libraries in order to work correctly
+ (e.g. ``pseudo-native`` needs ``sqlite3-native``). Overrides,
+ as in the previous example, account for these kinds of
+ exceptions.
+
+ - Some packages have packaging code that assumes the presence of
+ the static libraries. If so, you might need to exclude them as
+ well.
+
diff --git a/documentation/dev-manual/start.rst b/documentation/dev-manual/start.rst
new file mode 100644
index 0000000000..386e5f5d29
--- /dev/null
+++ b/documentation/dev-manual/start.rst
@@ -0,0 +1,855 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+***********************************
+Setting Up to Use the Yocto Project
+***********************************
+
+This chapter provides guidance on how to prepare to use the Yocto
+Project. You can learn about creating a team environment to develop
+using the Yocto Project, how to set up a :ref:`build
+host <dev-manual/start:preparing the build host>`, how to locate
+Yocto Project source repositories, and how to create local Git
+repositories.
+
+Creating a Team Development Environment
+=======================================
+
+It might not be immediately clear how you can use the Yocto Project in a
+team development environment, or how to scale it for a large team of
+developers. You can adapt the Yocto Project to many different use cases
+and scenarios; however, this flexibility could cause difficulties if you
+are trying to create a working setup that scales effectively.
+
+To help you understand how to set up this type of environment, this
+section presents a procedure that gives you information that can help
+you get the results you want. The procedure is high-level and presents
+some of the project's most successful experiences, practices, solutions,
+and available technologies that have proved to work well in the past;
+however, keep in mind, the procedure here is simply a starting point.
+You can build off these steps and customize the procedure to fit any
+particular working environment and set of practices.
+
+#. *Determine Who is Going to be Developing:* You first need to
+ understand who is going to be doing anything related to the Yocto
+ Project and determine their roles. Making this determination is
+ essential to completing subsequent steps, which are to get your
+ equipment together and set up your development environment's
+ hardware topology.
+
+ Possible roles are:
+
+ - *Application Developer:* This type of developer does application
+ level work on top of an existing software stack.
+
+ - *Core System Developer:* This type of developer works on the
+ contents of the operating system image itself.
+
+ - *Build Engineer:* This type of developer manages Autobuilders and
+ releases. Depending on the specifics of the environment, not all
+ situations might need a Build Engineer.
+
+ - *Test Engineer:* This type of developer creates and manages
+ automated tests that are used to ensure all application and core
+ system development meets desired quality standards.
+
+#. *Gather the Hardware:* Based on the size and make-up of the team,
+ get the hardware together. Ideally, any development, build, or test
+ engineer uses a system that runs a supported Linux distribution.
+ These systems, in general, should be high performance (e.g. dual,
+ six-core Xeons with 24 Gbytes of RAM and plenty of disk space). You
+ can help ensure efficiency by having any machines used for testing
+ or that run Autobuilders be as high performance as possible.
+
+ .. note::
+
+ Given sufficient processing power, you might also consider
+ building Yocto Project development containers to be run under
+ Docker, which is described later.
+
+#. *Understand the Hardware Topology of the Environment:* Once you
+ understand the hardware involved and the make-up of the team, you
+ can understand the hardware topology of the development environment.
+ You can get a visual idea of the machines and their roles across the
+ development environment.
+
+#. *Use Git as Your Source Control Manager (SCM):* Keeping your
+ :term:`Metadata` (i.e. recipes,
+ configuration files, classes, and so forth) and any software you are
+ developing under the control of an SCM system that is compatible
+ with the OpenEmbedded build system is advisable. Of all of the SCMs
+ supported by BitBake, the Yocto Project team strongly recommends using
+ :ref:`overview-manual/development-environment:git`.
+ Git is a distributed system
+ that is easy to back up, allows you to work remotely, and then
+ connects back to the infrastructure.
+
+ .. note::
+
+ For information about BitBake, see the
+ :doc:`bitbake:index`.
+
+ It is relatively easy to set up Git services and create infrastructure like
+ :yocto_git:`/`, which is based on server software called
+ `Gitolite <https://gitolite.com>`__
+ with `cgit <https://git.zx2c4.com/cgit/about/>`__ being used to
+ generate the web interface that lets you view the repositories.
+ ``gitolite`` identifies users using SSH keys and allows
+ branch-based access controls to repositories that you can control as
+ little or as much as necessary.
+
+#. *Set up the Application Development Machines:* As mentioned earlier,
+ application developers are creating applications on top of existing
+ software stacks. Here are some best practices for setting up
+ machines used for application development:
+
+ - Use a pre-built toolchain that contains the software stack
+ itself. Then, develop the application code on top of the stack.
+ This method works well for small numbers of relatively isolated
+ applications.
+
+ - Keep your cross-development toolchains updated. You can do this
+ through provisioning either as new toolchain downloads or as
+ updates through a package update mechanism using ``opkg`` to
+ provide updates to an existing toolchain. The exact mechanics of
+ how and when to do this depend on local policy.
+
+ - Use multiple toolchains installed locally into different
+ locations to allow development across versions.
+
+#. *Set up the Core Development Machines:* As mentioned earlier, core
+ developers work on the contents of the operating system itself.
+ Here are some best practices for setting up machines used for
+ developing images:
+
+ - Have the :term:`OpenEmbedded Build System` available on
+ the developer workstations so developers can run their own builds
+ and directly rebuild the software stack.
+
+ - Keep the core system unchanged as much as possible and do your
+ work in layers on top of the core system. Doing so gives you a
+ greater level of portability when upgrading to new versions of
+ the core system or Board Support Packages (BSPs).
+
+ - Share layers amongst the developers of a particular project and
+ use layers to contain the policy configuration that defines the project.
+
+#. *Set up an Autobuilder:* Autobuilders are often the core of the
+ development environment. It is here that changes from individual
+ developers are brought together and centrally tested. Based on this
+ automated build and test environment, subsequent decisions about
+ releases can be made. Autobuilders also allow for "continuous
+ integration" style testing of software components and regression
+ identification and tracking.
+
+ See ":yocto_ab:`Yocto Project Autobuilder <>`" for more
+ information and links to buildbot. The Yocto Project team has found
+ this implementation works well in this role. A public example of
+ this is the Yocto Project Autobuilders, which the Yocto Project team
+ uses to test the overall health of the project.
+
+ The features of this system are:
+
+ - Highlights when commits break the build.
+
+ - Populates an :ref:`sstate
+ cache <overview-manual/concepts:shared state cache>` from which
+ developers can pull rather than requiring local builds.
+
+ - Allows commit hook triggers, which trigger builds when commits
+ are made.
+
+ - Allows triggering of automated image booting and testing under
+ the QuickEMUlator (QEMU).
+
+ - Supports incremental build testing and from-scratch builds.
+
+ - Shares output that allows developer testing and historical
+ regression investigation.
+
+ - Creates output that can be used for releases.
+
+ - Allows scheduling of builds so that resources can be used
+ efficiently.
+
+#. *Set up Test Machines:* Use a small number of shared, high
+ performance systems for testing purposes. Developers can use these
+ systems for wider, more extensive testing while they continue to
+ develop locally using their primary development system.
+
+#. *Document Policies and Change Flow:* The Yocto Project uses a
+ hierarchical structure and a pull model. There are scripts to create and
+ send pull requests (i.e. ``create-pull-request`` and
+ ``send-pull-request``). This model is in line with other open source
+ projects where maintainers are responsible for specific areas of the
+ project and a single maintainer handles the final "top-of-tree"
+ merges.
+
+ .. note::
+
+ You can also use a more collective push model. The ``gitolite``
+ software supports both the push and pull models quite easily.
+
+ As with any development environment, it is important to document the
+ policy used as well as any main project guidelines so they are
+ understood by everyone. It is also a good idea to have
+ well-structured commit messages, which are usually a part of a
+ project's guidelines. Good commit messages are essential when
+ looking back in time and trying to understand why changes were made.
+
+ If you discover that changes are needed to the core layer of the
+ project, it is worth sharing those with the community as soon as
+ possible. Chances are if you have discovered the need for changes,
+ someone else in the community needs them also.
+
+#. *Development Environment Summary:* Aside from the previous steps,
+ here are best practices within the Yocto Project development
+ environment:
+
+ - Use :ref:`overview-manual/development-environment:git` as the source control
+ system.
+
+ - Maintain your Metadata in layers that make sense for your
+ situation. See the ":ref:`overview-manual/yp-intro:the yocto project layer model`"
+ section in the Yocto Project Overview and Concepts Manual and the
+ ":ref:`dev-manual/layers:understanding and creating layers`"
+ section for more information on layers.
+
+ - Separate the project's Metadata and code by using separate Git
+ repositories. See the ":ref:`overview-manual/development-environment:yocto project source repositories`"
+ section in the Yocto Project Overview and Concepts Manual for
+ information on these repositories. See the
+ ":ref:`dev-manual/start:locating yocto project source files`"
+ section for information on how to set up local Git repositories
+ for related upstream Yocto Project Git repositories.
+
+ - Set up the directory for the shared state cache
+ (:term:`SSTATE_DIR`) where it makes sense. For example, set up the
+ sstate cache on a system used by developers in the same organization
+ and share the same source directories on their machines (a short
+ example appears after this list).
+
+ - Set up an Autobuilder and have it populate the sstate cache and
+ source directories.
+
+ - The Yocto Project community encourages you to send patches to the
+ project to fix bugs or add features. If you do submit patches,
+ follow the project commit guidelines for writing good commit
+ messages. See the ":doc:`../contributor-guide/submit-changes`"
+ section in the Yocto Project and OpenEmbedded Contributor Guide.
+
+ - Send changes to the core sooner than later as others are likely
+ to run into the same issues. For some guidance on mailing lists
+ to use, see the lists in the
+ ":ref:`contributor-guide/submit-changes:finding a suitable mailing list`"
+ section. For a description
+ of the available mailing lists, see the ":ref:`resources-mailinglist`" section in
+ the Yocto Project Reference Manual.
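+
+As an example of sharing the shared state cache and source downloads
+mentioned above, a team might point every developer's ``local.conf`` at
+shared network locations. The paths below are purely illustrative::
+
+   DL_DIR ?= "/srv/yocto/downloads"
+   SSTATE_DIR ?= "/srv/yocto/sstate-cache"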
+
+Preparing the Build Host
+========================
+
+This section provides procedures to set up a system to be used as your
+:term:`Build Host` for
+development using the Yocto Project. Your build host can be a native
+Linux machine (recommended); a machine (Linux, Mac, or Windows) that
+uses `CROPS <https://github.com/crops/poky-container>`__, which
+leverages `Docker Containers <https://www.docker.com/>`__; or a
+Windows machine capable of running version 2 of the Windows Subsystem
+For Linux (WSL 2).
+
+.. note::
+
+ The Yocto Project is not compatible with version 1 of
+ :wikipedia:`Windows Subsystem for Linux <Windows_Subsystem_for_Linux>`.
+ It is compatible with, but neither officially supported nor validated
+ on, WSL 2. If you still decide to use WSL, please upgrade to
+ `WSL 2 <https://learn.microsoft.com/en-us/windows/wsl/install>`__.
+
+Once your build host is set up to use the Yocto Project, further steps
+are necessary depending on what you want to accomplish. See the
+following references for information on how to prepare for Board Support
+Package (BSP) development and kernel development:
+
+- *BSP Development:* See the ":ref:`bsp-guide/bsp:preparing your build host to work with bsp layers`"
+ section in the Yocto Project Board Support Package (BSP) Developer's
+ Guide.
+
+- *Kernel Development:* See the ":ref:`kernel-dev/common:preparing the build host to work on the kernel`"
+ section in the Yocto Project Linux Kernel Development Manual.
+
+Setting Up a Native Linux Host
+------------------------------
+
+Follow these steps to prepare a native Linux machine as your Yocto
+Project Build Host:
+
+#. *Use a Supported Linux Distribution:* You should have a reasonably
+ current Linux-based host system. You will have the best results with
+ a recent release of Fedora, openSUSE, Debian, Ubuntu, RHEL or CentOS
+ as these releases are frequently tested against the Yocto Project and
+ officially supported. For a list of the distributions under
+ validation and their status, see the ":ref:`Supported Linux
+ Distributions <system-requirements-supported-distros>`"
+ section in the Yocto Project Reference Manual and the wiki page at
+ :yocto_wiki:`Distribution Support </Distribution_Support>`.
+
+#. *Have Enough Free Disk Space:* Your system should have at least 50 Gbytes
+ of free disk space for building images.
+
+#. *Meet Minimal Version Requirements:* The OpenEmbedded build system
+ should be able to run on any modern distribution that has the
+ following versions of Git, tar, Python, gcc and GNU make:
+
+ - Git &MIN_GIT_VERSION; or greater
+
+ - tar &MIN_TAR_VERSION; or greater
+
+ - Python &MIN_PYTHON_VERSION; or greater.
+
+ - gcc &MIN_GCC_VERSION; or greater.
+
+ - GNU make &MIN_MAKE_VERSION; or greater
+
+ If your build host does not meet any of these listed version
+ requirements, you can take steps to prepare the system so that you
+ can still use the Yocto Project. See the
+ ":ref:`ref-manual/system-requirements:required git, tar, python, make and gcc versions`"
+ section in the Yocto Project Reference Manual for information.
+
+#. *Install Development Host Packages:* Required development host
+ packages vary depending on your build host and what you want to do
+ with the Yocto Project. Collectively, the number of required packages
+ is large if you want to be able to cover all cases.
+
+ For lists of required packages for all scenarios, see the
+ ":ref:`ref-manual/system-requirements:required packages for the build host`"
+ section in the Yocto Project Reference Manual.
+
+Once you have completed the previous steps, you are ready to continue
+using a given development path on your native Linux machine. If you are
+going to use BitBake, see the
+":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
+section. If you are going
+to use the Extensible SDK, see the ":doc:`/sdk-manual/extensible`" Chapter in the Yocto
+Project Application Development and the Extensible Software Development
+Kit (eSDK) manual. If you want to work on the kernel, see the :doc:`/kernel-dev/index`. If you are going to use
+Toaster, see the ":doc:`/toaster-manual/setup-and-use`"
+section in the Toaster User Manual. If you are a VSCode user, you can configure
+the `Yocto Project BitBake
+<https://marketplace.visualstudio.com/items?itemName=yocto-project.yocto-bitbake>`__
+extension accordingly.
+
+Setting Up to Use CROss PlatformS (CROPS)
+-----------------------------------------
+
+With `CROPS <https://github.com/crops/poky-container>`__, which
+leverages `Docker Containers <https://www.docker.com/>`__, you can
+create a Yocto Project development environment that is operating system
+agnostic. You can set up a container in which you can develop using the
+Yocto Project on a Windows, Mac, or Linux machine.
+
+Follow these general steps to prepare a Windows, Mac, or Linux machine
+as your Yocto Project build host:
+
+#. *Determine What Your Build Host Needs:*
+ `Docker <https://www.docker.com/what-docker>`__ is a software
+ container platform that you need to install on the build host.
+ Depending on your build host, you might have to install different
+ software to support Docker containers. Go to the Docker installation
+ page and read about the platform requirements in "`Supported
+ Platforms <https://docs.docker.com/engine/install/#supported-platforms>`__"
+ that your build host needs to run containers.
+
+#. *Choose What To Install:* Depending on whether or not your build host
+ meets system requirements, you need to install "Docker CE Stable" or
+ the "Docker Toolbox". Most situations call for Docker CE. However, if
+ you have a build host that does not meet requirements (e.g.
+ Pre-Windows 10 or Windows 10 "Home" version), you must install Docker
+ Toolbox instead.
+
+#. *Go to the Install Site for Your Platform:* Click the link for the
+ Docker edition associated with your build host's native software. For
+ example, if your build host is running Microsoft Windows Version 10
+ and you want the Docker CE Stable edition, click that link under
+ "Supported Platforms".
+
+#. *Install the Software:* Once you have understood all the
+ pre-requisites, you can download and install the appropriate
+ software. Follow the instructions for your specific machine and the
+ type of the software you need to install:
+
+ - Install `Docker Desktop on
+ Windows <https://docs.docker.com/docker-for-windows/install/#install-docker-desktop-on-windows>`__
+ for Windows build hosts that meet requirements.
+
+ - Install `Docker Desktop on
+ MacOs <https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-desktop-on-mac>`__
+ for Mac build hosts that meet requirements.
+
+ - Install `Docker Engine on
+ CentOS <https://docs.docker.com/engine/install/centos/>`__
+ for Linux build hosts running the CentOS distribution.
+
+ - Install `Docker Engine on
+ Debian <https://docs.docker.com/engine/install/debian/>`__
+ for Linux build hosts running the Debian distribution.
+
+ - Install `Docker Engine for
+ Fedora <https://docs.docker.com/engine/install/fedora/>`__
+ for Linux build hosts running the Fedora distribution.
+
+ - Install `Docker Engine for
+ Ubuntu <https://docs.docker.com/engine/install/ubuntu/>`__
+ for Linux build hosts running the Ubuntu distribution.
+
+#. *Optionally Orient Yourself With Docker:* If you are unfamiliar with
+ Docker and the container concept, you can learn more here -
+ https://docs.docker.com/get-started/.
+
+#. *Launch Docker or Docker Toolbox:* You should be able to launch
+ Docker or the Docker Toolbox and have a terminal shell on your
+ development host.
+
+#. *Set Up the Containers to Use the Yocto Project:* Go to
+ https://github.com/crops/docker-win-mac-docs/wiki and follow
+ the directions for your particular build host (i.e. Linux, Mac, or
+ Windows).
+
+ Once you complete the setup instructions for your machine, you have
+ the Poky, Extensible SDK, and Toaster containers available. You can
+ click those links from the page and learn more about using each of
+ those containers.
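+
+ For reference, once the Poky container image is available, entering it
+ typically looks something like the following. The local directory path
+ is an example; check the CROPS documentation linked above for the exact
+ invocation recommended for your setup::
+
+    $ docker run --rm -it -v /home/myuser/yocto:/workdir crops/poky --workdir=/workdir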
+
+Once you have a container set up, everything is in place to develop just
+as if you were running on a native Linux machine. If you are going to
+use the Poky container, see the
+":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
+section. If you are going to use the Extensible SDK container, see the
+":doc:`/sdk-manual/extensible`" Chapter in the Yocto
+Project Application Development and the Extensible Software Development
+Kit (eSDK) manual. If you are going to use the Toaster container, see
+the ":doc:`/toaster-manual/setup-and-use`"
+section in the Toaster User Manual. If you are a VSCode user, you can configure
+the `Yocto Project BitBake
+<https://marketplace.visualstudio.com/items?itemName=yocto-project.yocto-bitbake>`__
+extension accordingly.
+
+Setting Up to Use Windows Subsystem For Linux (WSL 2)
+-----------------------------------------------------
+
+With `Windows Subsystem for Linux (WSL 2)
+<https://learn.microsoft.com/en-us/windows/wsl/>`__,
+you can create a Yocto Project development environment that allows you
+to build on Windows. You can set up a Linux distribution inside Windows
+in which you can develop using the Yocto Project.
+
+Follow these general steps to prepare a Windows machine using WSL 2 as
+your Yocto Project build host:
+
+#. *Make sure your Windows machine is capable of running WSL 2:*
+
+ While all Windows 11 and Windows Server 2022 builds support WSL 2,
+ the first versions of Windows 10 and Windows Server 2019 didn't.
+ Check the minimum build numbers for `Windows 10
+ <https://learn.microsoft.com/en-us/windows/wsl/install-manual#step-2---check-requirements-for-running-wsl-2>`__
+ and for `Windows Server 2019
+ <https://learn.microsoft.com/en-us/windows/wsl/install-on-server>`__.
+
+ To check which build version you are running, you may open a command
+ prompt on Windows and execute the command "ver"::
+
+ C:\Users\myuser> ver
+
+ Microsoft Windows [Version 10.0.19041.153]
+
+#. *Install the Linux distribution of your choice inside WSL 2:*
+ Once you know your version of Windows supports WSL 2, you can
+ install the distribution of your choice from the Microsoft Store.
+ Open the Microsoft Store and search for Linux. While there are
+ several Linux distributions available, the assumption is that your
+ pick will be one of the distributions supported by the Yocto Project
+   as stated in the instructions for using a native Linux host. After
+ making your selection, simply click "Get" to download and install the
+ distribution.
+
+#. *Check which Linux distribution WSL 2 is using:* Open a Windows
+ PowerShell and run::
+
+ C:\WINDOWS\system32> wsl -l -v
+ NAME STATE VERSION
+ *Ubuntu Running 2
+
+ Note that WSL 2 supports running as many different Linux distributions
+ as you want to install.
+
+#. *Optionally Get Familiar with WSL:* You can learn more on
+ https://docs.microsoft.com/en-us/windows/wsl/wsl2-about.
+
+#. *Launch your WSL Distribution:* From the Windows start menu simply
+ launch your WSL distribution just like any other application.
+
+#. *Optimize your WSL 2 storage often:* Due to the way storage is
+   handled on WSL 2, the storage space used by the underlying Linux
+   distribution is not reflected immediately on the Windows side. Since
+   BitBake heavily uses storage, after several builds you may be running
+   out of space without being aware of it. As WSL 2 uses a VHDX file for
+   storage, this issue can easily be avoided by regularly optimizing this
+   file manually:
+
+ 1. *Find the location of your VHDX file:*
+
+      First you need to find the distro app package directory. To do so,
+      open a Windows PowerShell as Administrator and run::
+
+ C:\WINDOWS\system32> Get-AppxPackage -Name "*Ubuntu*" | Select PackageFamilyName
+ PackageFamilyName
+ -----------------
+ CanonicalGroupLimited.UbuntuonWindows_79abcdefgh
+
+
+      Now replace the PackageFamilyName and your user name in the
+      following path to find your VHDX file::
+
+ ls C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\
+ Mode LastWriteTime Length Name
+ -a---- 3/14/2020 9:52 PM 57418973184 ext4.vhdx
+
+ Your VHDX file path is:
+ ``C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\ext4.vhdx``
+
+ 2a. *Optimize your VHDX file using Windows Powershell:*
+
+ To use the ``optimize-vhd`` cmdlet below, first install the Hyper-V
+      option on Windows. Then, open a Windows PowerShell as Administrator to
+ optimize your VHDX file, shutting down WSL first::
+
+ C:\WINDOWS\system32> wsl --shutdown
+ C:\WINDOWS\system32> optimize-vhd -Path C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\ext4.vhdx -Mode full
+
+ A progress bar should be shown while optimizing the
+ VHDX file, and storage should now be reflected correctly on the
+ Windows Explorer.
+
+ 2b. *Optimize your VHDX file using DiskPart:*
+
+ The ``optimize-vhd`` cmdlet noted in step 2a above is provided by
+ Hyper-V. Not all SKUs of Windows can install Hyper-V. As an alternative,
+ use the DiskPart tool. To start, open a Windows command prompt as
+ Administrator to optimize your VHDX file, shutting down WSL first::
+
+ C:\WINDOWS\system32> wsl --shutdown
+ C:\WINDOWS\system32> diskpart
+
+ DISKPART> select vdisk file="<path_to_VHDX_file>"
+ DISKPART> attach vdisk readonly
+ DISKPART> compact vdisk
+ DISKPART> exit
+
+.. note::
+
+   The current implementation of WSL 2 does not have out-of-the-box
+   access to external devices such as those connected through a USB
+   port. However, it automatically mounts your ``C:`` drive on ``/mnt/c/``
+   (and others), which you can use to share deploy artifacts to be later
+   flashed on hardware through Windows. Note that your :term:`Build Directory`
+   should not reside inside this mountpoint.
+
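+For example, once you have built an image inside WSL 2, a deploy artifact
+could be copied out to the Windows side through this mountpoint with a
+command along these lines (the machine, image and user names are
+placeholders)::
+
+   $ cp tmp/deploy/images/mymachine/myimage.wic /mnt/c/Users/myuser/Downloads/
+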
+Once you have WSL 2 set up, everything is in place to develop just as if
+you were running on a native Linux machine. If you are going to use the
+Extensible SDK container, see the ":doc:`/sdk-manual/extensible`" Chapter in the Yocto
+Project Application Development and the Extensible Software Development
+Kit (eSDK) manual. If you are going to use the Toaster container, see
+the ":doc:`/toaster-manual/setup-and-use`"
+section in the Toaster User Manual. If you are a VSCode user, you can configure
+the `Yocto Project BitBake
+<https://marketplace.visualstudio.com/items?itemName=yocto-project.yocto-bitbake>`__
+extension accordingly.
+
+Locating Yocto Project Source Files
+===================================
+
+This section shows you how to locate, fetch and configure the source
+files you'll need to work with the Yocto Project.
+
+.. note::
+
+ - For concepts and introductory information about Git as it is used
+ in the Yocto Project, see the ":ref:`overview-manual/development-environment:git`"
+ section in the Yocto Project Overview and Concepts Manual.
+
+ - For concepts on Yocto Project source repositories, see the
+ ":ref:`overview-manual/development-environment:yocto project source repositories`"
+     section in the Yocto Project Overview and Concepts Manual.
+
+Accessing Source Repositories
+-----------------------------
+
+Working from a copy of the upstream :ref:`dev-manual/start:accessing source repositories` is the
+preferred method for obtaining and using a Yocto Project release. You
+can view the Yocto Project Source Repositories at
+:yocto_git:`/`. In particular, you can find the ``poky``
+repository at :yocto_git:`/poky`.
+
+Use the following procedure to locate the latest upstream copy of the
+``poky`` Git repository:
+
+#. *Access Repositories:* Open a browser and go to
+ :yocto_git:`/` to access the GUI-based interface into the
+ Yocto Project source repositories.
+
+#. *Select the Repository:* Click on the repository in which you are
+ interested (e.g. ``poky``).
+
+#. *Find the URL Used to Clone the Repository:* At the bottom of the
+ page, note the URL used to clone that repository
+ (e.g. :yocto_git:`/poky`).
+
+ .. note::
+
+ For information on cloning a repository, see the
+ ":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`" section.
+
+Accessing Source Archives
+-------------------------
+
+The Yocto Project also provides source archives of its releases, which
+are available on :yocto_dl:`/releases/yocto/`. Then, choose the subdirectory
+containing the release you wish to use, for example
+:yocto_dl:`yocto-&DISTRO; </releases/yocto/yocto-&DISTRO;/>`.
+
+There, you will find source archives of individual components (if you wish
+to use them individually) as well as of the corresponding Poky release,
+which bundles a selection of these components.
+
+.. note::
+
+ The recommended method for accessing Yocto Project components is to
+ use Git to clone the upstream repository and work from within that
+ locally cloned repository.
+
+Using the Downloads Page
+------------------------
+
+The :yocto_home:`Yocto Project Website <>` uses a "RELEASES" page
+from which you can locate and download tarballs of any Yocto Project
+release. Rather than Git repositories, these files represent snapshot
+tarballs similar to the tarballs located in the Index of Releases
+described in the ":ref:`dev-manual/start:accessing source archives`" section.
+
+#. *Go to the Yocto Project Website:* Open The
+ :yocto_home:`Yocto Project Website <>` in your browser.
+
+#. *Get to the Downloads Area:* Select the "RELEASES" item from the
+ pull-down "DEVELOPMENT" tab menu near the top of the page.
+
+#. *Select a Yocto Project Release:* At the top of the "RELEASES" page,
+   the currently supported releases are displayed; further down, previously
+   supported Yocto Project releases are listed. The "Download" links in the
+   rows of the table lead to the download tarballs for each release.
+
+ .. note::
+
+ For a "map" of Yocto Project releases to version numbers, see the
+ :yocto_wiki:`Releases </Releases>` wiki page.
+
+ You can use the "RELEASE ARCHIVE" link to reveal a menu of all Yocto
+ Project releases.
+
+#. *Download Tools or Board Support Packages (BSPs):* Next to the tarballs, you
+   will also find downloads for tools and BSPs. Just select a Yocto Project
+   release and look for what you need.
+
+Cloning and Checking Out Branches
+=================================
+
+To use the Yocto Project for development, you need a release locally
+installed on your development system. This locally installed set of
+files is referred to as the :term:`Source Directory`
+in the Yocto Project documentation.
+
+The preferred method of creating your Source Directory is by using
+:ref:`overview-manual/development-environment:git` to clone a local copy of the upstream
+``poky`` repository. Working from a cloned copy of the upstream
+repository allows you to contribute back into the Yocto Project or to
+simply work with the latest software on a development branch. Because
+Git maintains and creates an upstream repository with a complete history
+of changes and you are working with a local clone of that repository,
+you have access to all the Yocto Project development branches and tag
+names used in the upstream repository.
+
+Cloning the ``poky`` Repository
+-------------------------------
+
+Follow these steps to create a local version of the upstream
+:term:`Poky` Git repository.
+
+#. *Set Your Directory:* Change your working directory to where you want
+ to create your local copy of ``poky``.
+
+#. *Clone the Repository:* The following example command clones the
+ ``poky`` repository and uses the default name "poky" for your local
+ repository::
+
+ $ git clone git://git.yoctoproject.org/poky
+ Cloning into 'poky'...
+ remote: Counting objects: 432160, done.
+ remote: Compressing objects: 100% (102056/102056), done.
+ remote: Total 432160 (delta 323116), reused 432037 (delta 323000)
+ Receiving objects: 100% (432160/432160), 153.81 MiB | 8.54 MiB/s, done.
+ Resolving deltas: 100% (323116/323116), done.
+ Checking connectivity... done.
+
+ Unless you
+   specify a particular development branch or tag name, Git clones the
+ "master" branch, which results in a snapshot of the latest
+ development changes for "master". For information on how to check out
+ a specific development branch or on how to check out a local branch
+ based on a tag name, see the
+ ":ref:`dev-manual/start:checking out by branch in poky`" and
+ ":ref:`dev-manual/start:checking out by tag in poky`" sections, respectively.
+
+ Once the local repository is created, you can change to that
+ directory and check its status. The ``master`` branch is checked out
+ by default::
+
+ $ cd poky
+ $ git status
+ On branch master
+ Your branch is up-to-date with 'origin/master'.
+ nothing to commit, working directory clean
+ $ git branch
+ * master
+
+ Your local repository of poky is identical to the
+ upstream poky repository at the time from which it was cloned. As you
+ work with the local branch, you can periodically use the
+ ``git pull --rebase`` command to be sure you are up-to-date
+ with the upstream branch.
+
+Checking Out by Branch in Poky
+------------------------------
+
+When you clone the upstream poky repository, you have access to all its
+development branches. Each development branch in a repository is unique
+as it forks off the "master" branch. To see and use the files of a
+particular development branch locally, you need to know the branch name
+and then specifically check out that development branch.
+
+.. note::
+
+ Checking out an active development branch by branch name gives you a
+ snapshot of that particular branch at the time you check it out.
+   Further development on top of the branch can occur after you check
+   it out.
+
+#. *Switch to the Poky Directory:* If you have a local poky Git
+ repository, switch to that directory. If you do not have the local
+ copy of poky, see the
+ ":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
+ section.
+
+#. *Determine Existing Branch Names:*
+ ::
+
+ $ git branch -a
+ * master
+ remotes/origin/1.1_M1
+ remotes/origin/1.1_M2
+ remotes/origin/1.1_M3
+ remotes/origin/1.1_M4
+ remotes/origin/1.2_M1
+ remotes/origin/1.2_M2
+ remotes/origin/1.2_M3
+ . . .
+ remotes/origin/thud
+ remotes/origin/thud-next
+ remotes/origin/warrior
+ remotes/origin/warrior-next
+ remotes/origin/zeus
+ remotes/origin/zeus-next
+ ... and so on ...
+
+#. *Check out the Branch:* Check out the development branch in which you
+ want to work. For example, to access the files for the Yocto Project
+ &DISTRO; Release (&DISTRO_NAME;), use the following command::
+
+ $ git checkout -b &DISTRO_NAME_NO_CAP; origin/&DISTRO_NAME_NO_CAP;
+ Branch &DISTRO_NAME_NO_CAP; set up to track remote branch &DISTRO_NAME_NO_CAP; from origin.
+ Switched to a new branch '&DISTRO_NAME_NO_CAP;'
+
+ The previous command checks out the "&DISTRO_NAME_NO_CAP;" development
+ branch and reports that the branch is tracking the upstream
+ "origin/&DISTRO_NAME_NO_CAP;" branch.
+
+ The following command displays the branches that are now part of your
+ local poky repository. The asterisk character indicates the branch
+ that is currently checked out for work::
+
+ $ git branch
+ master
+ * &DISTRO_NAME_NO_CAP;
+
+Checking Out by Tag in Poky
+---------------------------
+
+Similar to branches, the upstream repository uses tags to mark specific
+commits associated with significant points in a development branch (i.e.
+a release point or stage of a release). You might want to set up a local
+branch based on one of those points in the repository. The process is
+similar to checking out by branch name except you use tag names.
+
+.. note::
+
+ Checking out a branch based on a tag gives you a stable set of files
+ not affected by development on the branch above the tag.
+
+#. *Switch to the Poky Directory:* If you have a local poky Git
+ repository, switch to that directory. If you do not have the local
+ copy of poky, see the
+ ":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
+ section.
+
+#. *Fetch the Tag Names:* To check out the branch based on a tag name,
+ you need to fetch the upstream tags into your local repository::
+
+ $ git fetch --tags
+ $
+
+#. *List the Tag Names:* You can list the tag names now::
+
+ $ git tag
+ 1.1_M1.final
+ 1.1_M1.rc1
+ 1.1_M1.rc2
+ 1.1_M2.final
+ 1.1_M2.rc1
+ .
+ .
+ .
+ yocto-2.5
+ yocto-2.5.1
+ yocto-2.5.2
+ yocto-2.5.3
+ yocto-2.6
+ yocto-2.6.1
+ yocto-2.6.2
+ yocto-2.7
+ yocto_1.5_M5.rc8
+
+
+#. *Check out the Branch:*
+ ::
+
+ $ git checkout tags/yocto-&DISTRO; -b my_yocto_&DISTRO;
+ Switched to a new branch 'my_yocto_&DISTRO;'
+ $ git branch
+ master
+ * my_yocto_&DISTRO;
+
+ The previous command creates and
+ checks out a local branch named "my_yocto_&DISTRO;", which is based on
+ the commit in the upstream poky repository that has the same tag. In
+ this example, the files you have available locally as a result of the
+ ``checkout`` command are a snapshot of the "&DISTRO_NAME_NO_CAP;"
+ development branch at the point where Yocto Project &DISTRO; was
+ released.
diff --git a/documentation/dev-manual/temporary-source-code.rst b/documentation/dev-manual/temporary-source-code.rst
new file mode 100644
index 0000000000..08bf68d982
--- /dev/null
+++ b/documentation/dev-manual/temporary-source-code.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Finding Temporary Source Code
+*****************************
+
+You might find it helpful during development to modify the temporary
+source code used by recipes to build packages. For example, suppose you
+are developing a patch and you need to experiment a bit to figure out
+your solution. After you have initially built the package, you can
+iteratively tweak the source code, which is located in the
+:term:`Build Directory`, and then you can force a re-compile and quickly
+test your altered code. Once you settle on a solution, you can then preserve
+your changes in the form of patches.
+
+During a build, the unpacked temporary source code used by recipes to
+build packages is available in the :term:`Build Directory` as defined by the
+:term:`S` variable. Below is the default value for the :term:`S` variable as
+defined in the ``meta/conf/bitbake.conf`` configuration file in the
+:term:`Source Directory`::
+
+ S = "${WORKDIR}/${BP}"
+
+You should be aware that many recipes override the
+:term:`S` variable. For example, recipes that fetch their source from Git
+usually set :term:`S` to ``${WORKDIR}/git``.
+
+.. note::
+
+ The :term:`BP` represents the base recipe name, which consists of the name
+ and version::
+
+ BP = "${BPN}-${PV}"
+
+
+The path to the work directory for the recipe
+(:term:`WORKDIR`) is defined as
+follows::
+
+ ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}/${PN}/${EXTENDPE}${PV}-${PR}
+
+The actual directory depends on several things:
+
+- :term:`TMPDIR`: The top-level build
+ output directory.
+
+- :term:`MULTIMACH_TARGET_SYS`:
+ The target system identifier.
+
+- :term:`PN`: The recipe name.
+
+- :term:`EXTENDPE`: The epoch --- if
+ :term:`PE` is not specified, which is
+ usually the case for most recipes, then :term:`EXTENDPE` is blank.
+
+- :term:`PV`: The recipe version.
+
+- :term:`PR`: The recipe revision.
+
+As an example, assume a Source Directory top-level folder named
+``poky``, a default :term:`Build Directory` at ``poky/build``, and a
+``qemux86-poky-linux`` machine target system. Furthermore, suppose your
+recipe is named ``foo_1.3.0.bb``. In this case, the work directory the
+build system uses to build the package would be as follows::
+
+ poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0
+
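+Rather than assembling this path by hand, you can also ask BitBake for the
+value it actually uses for a given recipe. For example, for the hypothetical
+``foo`` recipe above::
+
+   $ bitbake -e foo | grep "^WORKDIR="
+
+The same approach works for :term:`S` if you want the exact location of the
+unpacked source code.
+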
diff --git a/documentation/dev-manual/upgrading-recipes.rst b/documentation/dev-manual/upgrading-recipes.rst
new file mode 100644
index 0000000000..4fac78bdfb
--- /dev/null
+++ b/documentation/dev-manual/upgrading-recipes.rst
@@ -0,0 +1,397 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Upgrading Recipes
+*****************
+
+Over time, upstream developers publish new versions for software built
+by layer recipes. It is recommended to keep recipes up-to-date with
+upstream version releases.
+
+While there are several methods to upgrade a recipe, you might
+consider checking on the upgrade status of a recipe first. You can do so
+using the ``devtool check-upgrade-status`` command. See the
+":ref:`devtool-checking-on-the-upgrade-status-of-a-recipe`"
+section in the Yocto Project Reference Manual for more information.
+
+The remainder of this section describes three ways you can upgrade a
+recipe. You can use the Automated Upgrade Helper (AUH) to set up
+automatic version upgrades. Alternatively, you can use
+``devtool upgrade`` to set up semi-automatic version upgrades. Finally,
+you can manually upgrade a recipe by editing the recipe itself.
+
+Using the Auto Upgrade Helper (AUH)
+===================================
+
+The AUH utility works in conjunction with the OpenEmbedded build system
+in order to automatically generate upgrades for recipes based on new
+versions being published upstream. Use AUH when you want to create a
+service that performs the upgrades automatically and optionally sends
+you an email with the results.
+
+AUH allows you to update several recipes with a single use. You can also
+optionally perform build and integration tests using images with the
+results saved to your hard drive and emails of results optionally sent
+to recipe maintainers. Finally, AUH creates Git commits with appropriate
+commit messages in the layer's tree for the changes made to recipes.
+
+.. note::
+
+ In some conditions, you should not use AUH to upgrade recipes
+ and should instead use either ``devtool upgrade`` or upgrade your
+ recipes manually:
+
+ - When AUH cannot complete the upgrade sequence. This situation
+ usually results because custom patches carried by the recipe
+ cannot be automatically rebased to the new version. In this case,
+ ``devtool upgrade`` allows you to manually resolve conflicts.
+
+ - When for any reason you want fuller control over the upgrade
+ process. For example, when you want special arrangements for
+ testing.
+
+The following steps describe how to set up the AUH utility:
+
+#. *Be Sure the Development Host is Set Up:* You need to be sure that
+ your development host is set up to use the Yocto Project. For
+ information on how to set up your host, see the
+ ":ref:`dev-manual/start:Preparing the Build Host`" section.
+
+#. *Make Sure Git is Configured:* The AUH utility requires Git to be
+ configured because AUH uses Git to save upgrades. Thus, you must have
+ Git user and email configured. The following command shows your
+ configurations::
+
+ $ git config --list
+
+ If you do not have the user and
+ email configured, you can use the following commands to do so::
+
+ $ git config --global user.name some_name
+ $ git config --global user.email username@domain.com
+
+#. *Clone the AUH Repository:* To use AUH, you must clone the repository
+ onto your development host. The following command uses Git to create
+ a local copy of the repository on your system::
+
+ $ git clone git://git.yoctoproject.org/auto-upgrade-helper
+ Cloning into 'auto-upgrade-helper'... remote: Counting objects: 768, done.
+ remote: Compressing objects: 100% (300/300), done.
+ remote: Total 768 (delta 499), reused 703 (delta 434)
+ Receiving objects: 100% (768/768), 191.47 KiB | 98.00 KiB/s, done.
+ Resolving deltas: 100% (499/499), done.
+ Checking connectivity... done.
+
+ AUH is not part of the :term:`OpenEmbedded-Core (OE-Core)` or
+ :term:`Poky` repositories.
+
+#. *Create a Dedicated Build Directory:* Run the :ref:`structure-core-script`
+ script to create a fresh :term:`Build Directory` that you use exclusively
+ for running the AUH utility::
+
+ $ cd poky
+ $ source oe-init-build-env your_AUH_build_directory
+
+ Re-using an existing :term:`Build Directory` and its configurations is not
+ recommended as existing settings could cause AUH to fail or behave
+ undesirably.
+
+#. *Make Configurations in Your Local Configuration File:* Several
+ settings are needed in the ``local.conf`` file in the build
+ directory you just created for AUH. Make these following
+ configurations:
+
+ - If you want to enable :ref:`Build
+ History <dev-manual/build-quality:maintaining build output quality>`,
+ which is optional, you need the following lines in the
+ ``conf/local.conf`` file::
+
+ INHERIT =+ "buildhistory"
+ BUILDHISTORY_COMMIT = "1"
+
+ With this configuration and a successful
+ upgrade, a build history "diff" file appears in the
+ ``upgrade-helper/work/recipe/buildhistory-diff.txt`` file found in
+ your :term:`Build Directory`.
+
+ - If you want to enable testing through the :ref:`ref-classes-testimage`
+ class, which is optional, you need to have the following set in
+ your ``conf/local.conf`` file::
+
+ IMAGE_CLASSES += "testimage"
+
+ .. note::
+
+         If your distro does not enable ptest by default (Poky does),
+         you need the following in your ``local.conf`` file::
+
+ DISTRO_FEATURES:append = " ptest"
+
+
+#. *Optionally Start a vncserver:* If you are running in a server
+ without an X11 session, you need to start a vncserver::
+
+ $ vncserver :1
+ $ export DISPLAY=:1
+
+#. *Create and Edit an AUH Configuration File:* You need to have the
+ ``upgrade-helper/upgrade-helper.conf`` configuration file in your
+ :term:`Build Directory`. You can find a sample configuration file in the
+ :yocto_git:`AUH source repository </auto-upgrade-helper/tree/>`.
+
+ Read through the sample file and make configurations as needed. For
+ example, if you enabled build history in your ``local.conf`` as
+ described earlier, you must enable it in ``upgrade-helper.conf``.
+
+ Also, if you are using the default ``maintainers.inc`` file supplied
+ with Poky and located in ``meta-yocto`` and you do not set a
+ "maintainers_whitelist" or "global_maintainer_override" in the
+ ``upgrade-helper.conf`` configuration, and you specify "-e all" on
+ the AUH command-line, the utility automatically sends out emails to
+ all the default maintainers. Please avoid this.
+
+This next set of examples describes how to use the AUH:
+
+- *Upgrading a Specific Recipe:* To upgrade a specific recipe, use the
+ following form::
+
+ $ upgrade-helper.py recipe_name
+
+ For example, this command upgrades the ``xmodmap`` recipe::
+
+ $ upgrade-helper.py xmodmap
+
+- *Upgrading a Specific Recipe to a Particular Version:* To upgrade a
+ specific recipe to a particular version, use the following form::
+
+ $ upgrade-helper.py recipe_name -t version
+
+ For example, this command upgrades the ``xmodmap`` recipe to version 1.2.3::
+
+ $ upgrade-helper.py xmodmap -t 1.2.3
+
+- *Upgrading all Recipes to the Latest Versions and Suppressing Email
+ Notifications:* To upgrade all recipes to their most recent versions
+ and suppress the email notifications, use the following command::
+
+ $ upgrade-helper.py all
+
+- *Upgrading all Recipes to the Latest Versions and Send Email
+ Notifications:* To upgrade all recipes to their most recent versions
+ and send email messages to maintainers for each attempted recipe as
+ well as a status email, use the following command::
+
+ $ upgrade-helper.py -e all
+
+Once you have run the AUH utility, you can find the results in the AUH
+:term:`Build Directory`::
+
+ ${BUILDDIR}/upgrade-helper/timestamp
+
+The AUH utility
+also creates recipe update commits from successful upgrade attempts in
+the layer tree.
+
+You can easily set up the AUH utility to run on a regular basis by using
+a cron job. See the
+:yocto_git:`weeklyjob.sh </auto-upgrade-helper/tree/weeklyjob.sh>`
+file distributed with the utility for an example.
+
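+For instance, a crontab entry along the following lines (the path to your
+copy of the script is hypothetical) would run the job early every Sunday
+morning::
+
+   # m h dom mon dow command
+   30 2 * * 0 /home/auh/auto-upgrade-helper/weeklyjob.sh
+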
+Using ``devtool upgrade``
+=========================
+
+As mentioned earlier, an alternative method for upgrading recipes to
+newer versions is to use
+:doc:`devtool upgrade </ref-manual/devtool-reference>`.
+You can read about ``devtool upgrade`` in general in the
+":ref:`sdk-manual/extensible:use \`\`devtool upgrade\`\` to create a version of the recipe that supports a newer version of the software`"
+section in the Yocto Project Application Development and the Extensible
+Software Development Kit (eSDK) Manual.
+
+To see all the command-line options available with ``devtool upgrade``,
+use the following help command::
+
+ $ devtool upgrade -h
+
+If you want to find out what version a recipe is currently at upstream
+without any attempt to upgrade your local version of the recipe, you can
+use the following command::
+
+ $ devtool latest-version recipe_name
+
+As mentioned in the previous section describing AUH, ``devtool upgrade``
+works in a less-automated manner than AUH. Specifically,
+``devtool upgrade`` only works on a single recipe that you name on the
+command line, cannot perform build and integration testing using images,
+and does not automatically generate commits for changes in the source
+tree. Despite all these "limitations", ``devtool upgrade`` updates the
+recipe file to the new upstream version and attempts to rebase custom
+patches contained by the recipe as needed.
+
+.. note::
+
+   AUH uses much of ``devtool upgrade`` behind the scenes, making AUH somewhat
+ of a "wrapper" application for ``devtool upgrade``.
+
+A typical scenario involves having used Git to clone an upstream
+repository that you use during build operations. Because you have built the
+recipe in the past, the layer is likely added to your
+configuration already. If for some reason the layer is not added, you
+could add it easily using the
+":ref:`bitbake-layers <bsp-guide/bsp:creating a new bsp layer using the \`\`bitbake-layers\`\` script>`"
+script. For example, suppose you use the ``nano.bb`` recipe from the
+``meta-oe`` layer in the ``meta-openembedded`` repository. For this
+example, assume that the layer has been cloned into following area::
+
+ /home/scottrif/meta-openembedded
+
+The following command from your :term:`Build Directory` adds the layer to
+your build configuration (i.e. ``${BUILDDIR}/conf/bblayers.conf``)::
+
+ $ bitbake-layers add-layer /home/scottrif/meta-openembedded/meta-oe
+ NOTE: Starting bitbake server...
+ Parsing recipes: 100% |##########################################| Time: 0:00:55
+ Parsing of 1431 .bb files complete (0 cached, 1431 parsed). 2040 targets, 56 skipped, 0 masked, 0 errors.
+ Removing 12 recipes from the x86_64 sysroot: 100% |##############| Time: 0:00:00
+ Removing 1 recipes from the x86_64_i586 sysroot: 100% |##########| Time: 0:00:00
+ Removing 5 recipes from the i586 sysroot: 100% |#################| Time: 0:00:00
+ Removing 5 recipes from the qemux86 sysroot: 100% |##############| Time: 0:00:00
+
+For this example, assume that the ``nano.bb`` recipe that
+is upstream has a 2.9.3 version number. However, the version in the
+local repository is 2.7.4. The following command from your build
+directory automatically upgrades the recipe for you::
+
+ $ devtool upgrade nano -V 2.9.3
+ NOTE: Starting bitbake server...
+ NOTE: Creating workspace layer in /home/scottrif/poky/build/workspace
+ Parsing recipes: 100% |##########################################| Time: 0:00:46
+ Parsing of 1431 .bb files complete (0 cached, 1431 parsed). 2040 targets, 56 skipped, 0 masked, 0 errors.
+ NOTE: Extracting current version source...
+ NOTE: Resolving any missing task queue dependencies
+ .
+ .
+ .
+ NOTE: Executing SetScene Tasks
+ NOTE: Executing RunQueue Tasks
+ NOTE: Tasks Summary: Attempted 74 tasks of which 72 didn't need to be rerun and all succeeded.
+ Adding changed files: 100% |#####################################| Time: 0:00:00
+ NOTE: Upgraded source extracted to /home/scottrif/poky/build/workspace/sources/nano
+ NOTE: New recipe is /home/scottrif/poky/build/workspace/recipes/nano/nano_2.9.3.bb
+
+.. note::
+
+ Using the ``-V`` option is not necessary. Omitting the version number causes
+ ``devtool upgrade`` to upgrade the recipe to the most recent version.
+
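+At this point, the upgraded recipe lives in your ``devtool`` workspace. You
+can list the recipes currently in the workspace at any time with::
+
+   $ devtool status
+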
+Continuing with this example, you can use ``devtool build`` to build the
+newly upgraded recipe::
+
+ $ devtool build nano
+ NOTE: Starting bitbake server...
+ Loading cache: 100% |################################################################################################| Time: 0:00:01
+ Loaded 2040 entries from dependency cache.
+ Parsing recipes: 100% |##############################################################################################| Time: 0:00:00
+ Parsing of 1432 .bb files complete (1431 cached, 1 parsed). 2041 targets, 56 skipped, 0 masked, 0 errors.
+ NOTE: Resolving any missing task queue dependencies
+ .
+ .
+ .
+ NOTE: Executing SetScene Tasks
+ NOTE: Executing RunQueue Tasks
+ NOTE: nano: compiling from external source tree /home/scottrif/poky/build/workspace/sources/nano
+ NOTE: Tasks Summary: Attempted 520 tasks of which 304 didn't need to be rerun and all succeeded.
+
+Within the ``devtool upgrade`` workflow, you can
+deploy and test your rebuilt software. For this example, however, the next
+step is to run ``devtool finish``, which cleans up the workspace once the
+source tree in your workspace is clean. This usually means using Git to
+stage and commit the changes generated by the upgrade process.
+
+Once the tree is clean, you can clean things up in this example with the
+following command from the ``${BUILDDIR}/workspace/sources/nano``
+directory::
+
+ $ devtool finish nano meta-oe
+ NOTE: Starting bitbake server...
+ Loading cache: 100% |################################################################################################| Time: 0:00:00
+ Loaded 2040 entries from dependency cache.
+ Parsing recipes: 100% |##############################################################################################| Time: 0:00:01
+ Parsing of 1432 .bb files complete (1431 cached, 1 parsed). 2041 targets, 56 skipped, 0 masked, 0 errors.
+ NOTE: Adding new patch 0001-nano.bb-Stuff-I-changed-when-upgrading-nano.bb.patch
+ NOTE: Updating recipe nano_2.9.3.bb
+ NOTE: Removing file /home/scottrif/meta-openembedded/meta-oe/recipes-support/nano/nano_2.7.4.bb
+ NOTE: Moving recipe file to /home/scottrif/meta-openembedded/meta-oe/recipes-support/nano
+ NOTE: Leaving source tree /home/scottrif/poky/build/workspace/sources/nano as-is; if you no longer need it then please delete it manually
+
+
+Using the ``devtool finish`` command cleans up the workspace and creates a patch
+file based on your commits. The tool puts all patch files back into the
+source directory in a sub-directory named ``nano`` in this case.
+
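+If you decide to abandon an upgrade instead of finishing it,
+``devtool reset`` removes the recipe from the workspace without writing
+any changes back to the layer, for example::
+
+   $ devtool reset nano
+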
+Manually Upgrading a Recipe
+===========================
+
+If for some reason you choose not to upgrade recipes using
+:ref:`dev-manual/upgrading-recipes:Using the Auto Upgrade Helper (AUH)` or
+by :ref:`dev-manual/upgrading-recipes:Using \`\`devtool upgrade\`\``,
+you can manually edit the recipe files to upgrade the versions.
+
+.. note::
+
+ Manually updating multiple recipes scales poorly and involves many
+ steps. The recommendation to upgrade recipe versions is through AUH
+ or ``devtool upgrade``, both of which automate some steps and provide
+ guidance for others needed for the manual process.
+
+To manually upgrade recipe versions, follow these general steps:
+
+#. *Change the Version:* Rename the recipe such that the version (i.e.
+   the :term:`PV` part of the recipe name) changes appropriately. If the
+   version is not part of the recipe name, change the value as it is set
+   for :term:`PV` within the recipe itself. A sketch of this step and the
+   next one is shown after this list.
+
+#. *Update* :term:`SRCREV` *if Needed*: If the source code your recipe builds
+ is fetched from Git or some other version control system, update
+ :term:`SRCREV` to point to the
+ commit hash that matches the new version.
+
+#. *Build the Software:* Try to build the recipe using BitBake. Typical
+ build failures include the following:
+
+ - License statements were updated for the new version. For this
+ case, you need to review any changes to the license and update the
+ values of :term:`LICENSE` and
+ :term:`LIC_FILES_CHKSUM`
+ as needed.
+
+ .. note::
+
+ License changes are often inconsequential. For example, the
+ license text's copyright year might have changed.
+
+ - Custom patches carried by the older version of the recipe might
+ fail to apply to the new version. For these cases, you need to
+ review the failures. Patches might not be necessary for the new
+ version of the software if the upgraded version has fixed those
+ issues. If a patch is necessary and failing, you need to rebase it
+ into the new version.
+
+#. *Optionally Attempt to Build for Several Architectures:* Once you
+ successfully build the new software for a given architecture, you
+ could test the build for other architectures by changing the
+ :term:`MACHINE` variable and
+ rebuilding the software. This optional step is especially important
+ if the recipe is to be released publicly.
+
+#. *Check the Upstream Change Log or Release Notes:* Checking both these
+ reveals if there are new features that could break
+ backwards-compatibility. If so, you need to take steps to mitigate or
+ eliminate that situation.
+
+#. *Optionally Create a Bootable Image and Test:* If you want, you can
+ test the new software by booting it onto actual hardware.
+
+#. *Create a Commit with the Change in the Layer Repository:* After all
+ builds work and any testing is successful, you can create commits for
+ any changes in the layer holding your upgraded recipe.
+
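+As a minimal sketch of the first two steps, assume a hypothetical recipe
+``foo`` fetched from Git that is being upgraded from version 1.3.0 to
+1.4.0. The rename might look like this::
+
+   $ git mv recipes-example/foo/foo_1.3.0.bb recipes-example/foo/foo_1.4.0.bb
+
+Then, inside the renamed recipe, point :term:`SRCREV` at the commit hash
+matching the new version (the hash below is just an illustration)::
+
+   SRCREV = "0123456789abcdef0123456789abcdef01234567"
+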
diff --git a/documentation/dev-manual/vulnerabilities.rst b/documentation/dev-manual/vulnerabilities.rst
new file mode 100644
index 0000000000..1bc2a85929
--- /dev/null
+++ b/documentation/dev-manual/vulnerabilities.rst
@@ -0,0 +1,293 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Checking for Vulnerabilities
+****************************
+
+Vulnerabilities in Poky and OE-Core
+===================================
+
+The Yocto Project has an infrastructure to track and address unfixed
+known security vulnerabilities, as tracked by the public
+:wikipedia:`Common Vulnerabilities and Exposures (CVE) <Common_Vulnerabilities_and_Exposures>`
+database.
+
+The Yocto Project maintains a `list of known vulnerabilities
+<https://autobuilder.yocto.io/pub/non-release/patchmetrics/>`__
+for packages in Poky and OE-Core, tracking the evolution of the number of
+unpatched CVEs and the status of patches. Such information is available for
+the current development version and for each supported release.
+
+Security is a process, not a product, and thus at any time, a number of security
+issues may be impacting Poky and OE-Core. It is up to the maintainers, users,
+contributors and anyone interested in the issues to investigate and possibly fix them by
+updating software components to newer versions or by applying patches to address them.
+It is recommended to work with Poky and OE-Core upstream maintainers and submit
+patches to fix them, see ":doc:`../contributor-guide/submit-changes`" for details.
+
+Vulnerability check at build time
+=================================
+
+To enable a check for CVE security vulnerabilities using
+:ref:`ref-classes-cve-check` in the specific image or target you are building,
+add the following setting to your configuration::
+
+ INHERIT += "cve-check"
+
+The CVE database contains some old incomplete entries which have been
+deemed not to impact Poky or OE-Core. These CVE entries can be excluded from the
+check using build configuration::
+
+ include conf/distro/include/cve-extra-exclusions.inc
+
+With this CVE check enabled, the BitBake build will try to map each compiled software component
+recipe name and version information to the CVE database and generate recipe and
+image specific reports. These reports will contain:
+
+- metadata about the software component like names and versions
+
+- metadata about the CVE issue such as description and NVD link
+
+- for each software component, a list of CVEs which are possibly impacting this version
+
+- status of each CVE: ``Patched``, ``Unpatched`` or ``Ignored``
+
+The status ``Patched`` means that a patch file to address the security issue has been
+applied. ``Unpatched`` status means that no patches to address the issue have been
+applied and that the issue needs to be investigated. ``Ignored`` means that after
+analysis, the issue has been deemed safe to ignore, for example because it only
+affects the software component on a different operating system platform.
+
+After a build with CVE check enabled, reports for each compiled source recipe will be
+found in ``build/tmp/deploy/cve``.
+
+For example the CVE check report for the ``flex-native`` recipe looks like::
+
+ $ cat poky/build/tmp/deploy/cve/flex-native
+ LAYER: meta
+ PACKAGE NAME: flex-native
+ PACKAGE VERSION: 2.6.4
+ CVE: CVE-2016-6354
+ CVE STATUS: Patched
+ CVE SUMMARY: Heap-based buffer overflow in the yy_get_next_buffer function in Flex before 2.6.1 might allow context-dependent attackers to cause a denial of service or possibly execute arbitrary code via vectors involving num_to_read.
+ CVSS v2 BASE SCORE: 7.5
+ CVSS v3 BASE SCORE: 9.8
+ VECTOR: NETWORK
+ MORE INFORMATION: https://nvd.nist.gov/vuln/detail/CVE-2016-6354
+
+ LAYER: meta
+ PACKAGE NAME: flex-native
+ PACKAGE VERSION: 2.6.4
+ CVE: CVE-2019-6293
+ CVE STATUS: Ignored
+ CVE SUMMARY: An issue was discovered in the function mark_beginning_as_normal in nfa.c in flex 2.6.4. There is a stack exhaustion problem caused by the mark_beginning_as_normal function making recursive calls to itself in certain scenarios involving lots of '*' characters. Remote attackers could leverage this vulnerability to cause a denial-of-service.
+ CVSS v2 BASE SCORE: 4.3
+ CVSS v3 BASE SCORE: 5.5
+ VECTOR: NETWORK
+ MORE INFORMATION: https://nvd.nist.gov/vuln/detail/CVE-2019-6293
+
+For images, a summary of all recipes included in the image and their CVEs is also
+generated in textual and JSON formats. These ``.cve`` and ``.json`` reports can be found
+in the ``tmp/deploy/images`` directory for each compiled image.
+
+At build time, the CVE check will also throw warnings about ``Unpatched`` CVEs::
+
+ WARNING: flex-2.6.4-r0 do_cve_check: Found unpatched CVE (CVE-2019-6293), for more information check /poky/build/tmp/work/core2-64-poky-linux/flex/2.6.4-r0/temp/cve.log
+ WARNING: libarchive-3.5.1-r0 do_cve_check: Found unpatched CVE (CVE-2021-36976), for more information check /poky/build/tmp/work/core2-64-poky-linux/libarchive/3.5.1-r0/temp/cve.log
+
+It is also possible to check the CVE status of individual packages as follows::
+
+ bitbake -c cve_check flex libarchive
+
+Fixing CVE product name and version mappings
+============================================
+
+By default, :ref:`ref-classes-cve-check` uses the recipe name :term:`BPN` as CVE
+product name when querying the CVE database. If this mapping produces false positives, e.g.
+some reported CVEs do not apply to the software component in question, or false negatives, e.g.
+some CVEs that should impact the recipe are not found, then the problem may lie in the
+recipe name to CVE product mapping. Such mapping issues can be fixed by setting
+the :term:`CVE_PRODUCT` variable inside the recipe. This variable defines the name of the software component in the
+upstream `NIST CVE database <https://nvd.nist.gov/>`__.
+
+The variable supports using vendor and product names like this::
+
+ CVE_PRODUCT = "flex_project:flex"
+
+In this example the vendor name used in the CVE database is ``flex_project`` and the
+product is ``flex``. With this setting the ``flex`` recipe only maps to this specific
+product and not products from other vendors with same name ``flex``.
+
+Similarly, when the recipe version :term:`PV` is not compatible with software versions used by
+the upstream software component releases and the CVE database, these can be fixed using
+the :term:`CVE_VERSION` variable.
+
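+For example, if a hypothetical recipe carries a :term:`PV` such as "1.2+git"
+while the CVE database records plain upstream version numbers, the recipe
+could set the version used for CVE matching explicitly::
+
+   CVE_VERSION = "1.2"
+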
+Note that if the CVE entries in the NVD database contain bugs or have missing or incomplete
+information, it is recommended to fix the information there directly instead of working
+around the issues possibly for a long time in Poky and OE-Core side recipes. Feedback to
+NVD about CVE entries can be provided through the `NVD contact form <https://nvd.nist.gov/info/contact-form>`__.
+
+Fixing vulnerabilities in recipes
+=================================
+
+Suppose a CVE security issue impacts a software component. In that case, it can
+be fixed by updating to a newer version, by applying a patch, or by marking it
+as patched via :term:`CVE_STATUS` variable flag. For Poky and OE-Core master
+branches, updating to a more recent software component release with fixes is
+the best option, but patches can be applied if releases are not yet available.
+
+For stable branches, we want to avoid API (Application Programming Interface)
+or ABI (Application Binary Interface) breakages. When submitting an update,
+a minor version update of a component is preferred if the version is
+backward-compatible. Many software components have backward-compatible stable
+versions, with a notable example of the Linux kernel. However, if the new
+version does or likely might introduce incompatibilities, extracting and
+backporting patches is preferred.
+
+Here is an example of fixing CVE security issues with patch files, taken
+from the :oe_layerindex:`ffmpeg recipe for dunfell </layerindex/recipe/122174>`::
+
+ SRC_URI = "https://www.ffmpeg.org/releases/${BP}.tar.xz \
+ file://mips64_cpu_detection.patch \
+ file://CVE-2020-12284.patch \
+ file://0001-libavutil-include-assembly-with-full-path-from-sourc.patch \
+ file://CVE-2021-3566.patch \
+ file://CVE-2021-38291.patch \
+ file://CVE-2022-1475.patch \
+ file://CVE-2022-3109.patch \
+ file://CVE-2022-3341.patch \
+ file://CVE-2022-48434.patch \
+ "
+
+The recipe has both generic and security-related fixes. The CVE patch files are named
+according to the CVE they fix.
+
+When preparing the patch file, take the original patch from the upstream repository.
+Do not use patches from different distributions, except if it is the only available source.
+
+Modify the patch adding OE-related metadata. We will follow the example of the
+``CVE-2022-3341.patch``.
+
+The original `commit message <https://github.com/FFmpeg/FFmpeg/commit/9cf652cef49d74afe3d454f27d49eb1a1394951e.patch/>`__
+is::
+
+ From 9cf652cef49d74afe3d454f27d49eb1a1394951e Mon Sep 17 00:00:00 2001
+ From: Jiasheng Jiang <jiasheng@iscas.ac.cn>
+ Date: Wed, 23 Feb 2022 10:31:59 +0800
+ Subject: [PATCH] avformat/nutdec: Add check for avformat_new_stream
+
+ Check for failure of avformat_new_stream() and propagate
+ the error code.
+
+ Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+ ---
+ libavformat/nutdec.c | 16 ++++++++++++----
+ 1 file changed, 12 insertions(+), 4 deletions(-)
+
+
+For ``cve-check`` to work correctly, the CVE must be identified in a
+``CVE:`` tag in the commit message of the patch file, using the
+format::
+
+ CVE: CVE-2022-3341
+
+It is also recommended to add the ``Upstream-Status:`` tag with a link
+to the original patch and sign-off by people working on the backport.
+If there are any modifications to the original patch, note them in
+the ``Comments:`` tag.
+
+With the additional information, the header of the patch file in OE-core becomes::
+
+ From 9cf652cef49d74afe3d454f27d49eb1a1394951e Mon Sep 17 00:00:00 2001
+ From: Jiasheng Jiang <jiasheng@iscas.ac.cn>
+ Date: Wed, 23 Feb 2022 10:31:59 +0800
+ Subject: [PATCH] avformat/nutdec: Add check for avformat_new_stream
+
+ Check for failure of avformat_new_stream() and propagate
+ the error code.
+
+ Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+ CVE: CVE-2022-3341
+
+ Upstream-Status: Backport [https://github.com/FFmpeg/FFmpeg/commit/9cf652cef49d74afe3d454f27d49eb1a1394951e]
+
+ Comments: Refreshed Hunk
+ Signed-off-by: Narpat Mali <narpat.mali@windriver.com>
+ Signed-off-by: Bhabu Bindu <bhabu.bindu@kpit.com>
+ ---
+ libavformat/nutdec.c | 16 ++++++++++++----
+ 1 file changed, 12 insertions(+), 4 deletions(-)
+
+A good practice is to include the CVE identifier in the patch file name, the patch file
+commit message and optionally in the recipe commit message.
+
+CVE checker will then capture this information and change the CVE status to ``Patched``
+in the generated reports.
+
+If analysis shows that the CVE issue does not impact the recipe due to configuration, platform,
+version or other reasons, the CVE can be marked as ``Ignored`` by using
+the :term:`CVE_STATUS` variable flag with appropriate reason which is mapped to ``Ignored``.
+The entry should have the format like::
+
+ CVE_STATUS[CVE-2016-10642] = "cpe-incorrect: This is specific to the npm package that installs cmake, so isn't relevant to OpenEmbedded"
+
+As mentioned previously, if data in the CVE database is wrong, it is recommended
+to fix those issues in the CVE database (NVD in the case of OE-core and Poky)
+directly.
+
+Note that if there are many CVEs with the same status and reason, those can be
+shared by using the :term:`CVE_STATUS_GROUPS` variable.
+
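+For instance, a recipe could group several entries sharing the same status
+and reason as follows (the CVE IDs here are purely illustrative)::
+
+   CVE_STATUS_GROUPS += "CVE_STATUS_WIN"
+
+   CVE_STATUS_WIN = "CVE-1234-0001 CVE-1234-0002"
+   CVE_STATUS_WIN[status] = "not-applicable-platform: Issue only applies on Windows"
+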
+Recipes can be completely skipped by CVE check by including the recipe name in
+the :term:`CVE_CHECK_SKIP_RECIPE` variable.
+
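+For example, the following line in a configuration file excludes a
+hypothetical recipe named ``my-recipe`` from the check::
+
+   CVE_CHECK_SKIP_RECIPE += "my-recipe"
+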
+Implementation details
+======================
+
+Here's what the :ref:`ref-classes-cve-check` class does to find unpatched CVE IDs.
+
+First the code goes through each patch file provided by a recipe. If a valid CVE ID
+is found in the name of the file, the corresponding CVE is considered as patched.
+Note that if multiple CVE IDs are found in the filename, only the last
+one is considered. Then, the code looks for ``CVE: CVE-ID`` lines in the patch
+file. The found CVE IDs are also considered as patched.
+Additionally ``CVE_STATUS`` variable flags are parsed for reasons mapped to ``Patched``
+and these are also considered as patched.
+
+Then, the code looks up all the CVE IDs in the NIST database for all the
+products defined in :term:`CVE_PRODUCT`. Then, for each found CVE:
+
+- If the package name (:term:`PN`) is part of
+ :term:`CVE_CHECK_SKIP_RECIPE`, it is considered as ``Patched``.
+
+- If the CVE ID has status ``CVE_STATUS[<CVE ID>] = "ignored"`` or if it's set to
+ any reason which is mapped to status ``Ignored`` via ``CVE_CHECK_STATUSMAP``,
+ it is set as ``Ignored``.
+
+- If the CVE ID is part of the patched CVE for the recipe, it is
+ already considered as ``Patched``.
+
+- Otherwise, the code checks whether the recipe version (:term:`PV`)
+ is within the range of versions impacted by the CVE. If so, the CVE
+ is considered as ``Unpatched``.
+
+The CVE database is stored in :term:`DL_DIR` and can be inspected using
+``sqlite3`` command as follows::
+
+ sqlite3 downloads/CVE_CHECK/nvdcve_1.1.db .dump | grep CVE-2021-37462
+
+When analyzing CVEs, it is recommended to:
+
+- study the latest information in `CVE database <https://nvd.nist.gov/vuln/search>`__.
+
+- check how upstream developers of the software component addressed the issue, e.g.
+ what patch was applied, which upstream release contains the fix.
+
+- check what other Linux distributions like `Debian <https://security-tracker.debian.org/tracker/>`__
+ did to analyze and address the issue.
+
+- follow security notices from other Linux distributions.
+
+- follow public `open source security mailing lists <https://oss-security.openwall.org/wiki/mailing-lists>`__ for
+ discussions and advance notifications of CVE bugs and software releases with fixes.
+
diff --git a/documentation/dev-manual/wayland.rst b/documentation/dev-manual/wayland.rst
new file mode 100644
index 0000000000..097be9cbde
--- /dev/null
+++ b/documentation/dev-manual/wayland.rst
@@ -0,0 +1,90 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using Wayland and Weston
+************************
+
+:wikipedia:`Wayland <Wayland_(display_server_protocol)>`
+is a computer display server protocol that provides a method for
+compositing window managers to communicate directly with applications
+and video hardware and expects them to communicate with input hardware
+using other libraries. Using Wayland with supporting targets can result
+in better control over graphics frame rendering than an application
+might otherwise achieve.
+
+The Yocto Project provides the Wayland protocol libraries and the
+reference :wikipedia:`Weston <Wayland_(display_server_protocol)#Weston>`
+compositor as part of its release. You can find the integrated packages
+in the ``meta`` layer of the :term:`Source Directory`.
+Specifically, you
+can find the recipes that build both Wayland and Weston at
+``meta/recipes-graphics/wayland``.
+
+You can build both the Wayland and Weston packages for use only with targets
+that accept the :wikipedia:`Mesa 3D and Direct Rendering Infrastructure
+<Mesa_(computer_graphics)>`, which is also known as Mesa DRI. This implies that
+you cannot build and use the packages if your target uses, for example, the
+Intel Embedded Media and Graphics Driver (Intel EMGD) that overrides Mesa DRI.
+
+.. note::
+
+ Due to lack of EGL support, Weston 1.0.3 will not run directly on the
+ emulated QEMU hardware. However, this version of Weston will run
+ under X emulation without issues.
+
+This section describes what you need to do to implement Wayland and use
+the Weston compositor when building an image for a supporting target.
+
+Enabling Wayland in an Image
+============================
+
+To enable Wayland, you need to enable it to be built and enable it to be
+included (installed) in the image.
+
+Building Wayland
+----------------
+
+To cause Mesa to build the ``wayland-egl`` platform and Weston to build
+Wayland with Kernel Mode Setting
+(`KMS <https://wiki.archlinux.org/index.php/Kernel_Mode_Setting>`__)
+support, include the "wayland" flag in the
+:term:`DISTRO_FEATURES`
+statement in your ``local.conf`` file::
+
+ DISTRO_FEATURES:append = " wayland"
+
+.. note::
+
+ If X11 has been enabled elsewhere, Weston will build Wayland with X11
+ support
+
+Installing Wayland and Weston
+-----------------------------
+
+To install the Wayland feature into an image, you must include the
+following
+:term:`CORE_IMAGE_EXTRA_INSTALL`
+statement in your ``local.conf`` file::
+
+ CORE_IMAGE_EXTRA_INSTALL += "wayland weston"
+
+Running Weston
+==============
+
+To run Weston inside X11, enabling it as described earlier and building
+a Sato image is sufficient. If you are running your image under Sato, a
+Weston Launcher appears in the "Utility" category.
+
+Alternatively, you can run Weston through the command-line interpreter
+(CLI), which is better suited for development work. To run Weston under
+the CLI, you need to do the following after your image is built:
+
+#. Run these commands to export ``XDG_RUNTIME_DIR``::
+
+ mkdir -p /tmp/$USER-weston
+ chmod 0700 /tmp/$USER-weston
+ export XDG_RUNTIME_DIR=/tmp/$USER-weston
+
+#. Launch Weston in the shell::
+
+ weston
+
diff --git a/documentation/dev-manual/wic.rst b/documentation/dev-manual/wic.rst
new file mode 100644
index 0000000000..a3880f3a1c
--- /dev/null
+++ b/documentation/dev-manual/wic.rst
@@ -0,0 +1,731 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Creating Partitioned Images Using Wic
+*************************************
+
+Creating an image for a particular hardware target using the
+OpenEmbedded build system does not necessarily mean you can boot that
+image as is on your device. Physical devices accept and boot images in
+various ways depending on the specifics of the device. Usually,
+information about the hardware can tell you what image format the device
+requires. Should your device require multiple partitions on an SD card,
+flash, or an HDD, you can use the OpenEmbedded Image Creator, Wic, to
+create the properly partitioned image.
+
+The ``wic`` command generates partitioned images from existing
+OpenEmbedded build artifacts. Image generation is driven by partitioning
+commands contained in an OpenEmbedded kickstart file (``.wks``)
+specified either directly on the command line or as one of a selection
+of canned kickstart files as shown with the ``wic list images`` command
+in the
+":ref:`dev-manual/wic:generate an image using an existing kickstart file`"
+section. When you apply the command to a given set of build artifacts, the
+result is an image or set of images that can be directly written onto media and
+used on a particular system.
+
+.. note::
+
+ For a kickstart file reference, see the
+ ":ref:`ref-manual/kickstart:openembedded kickstart (\`\`.wks\`\`) reference`"
+ Chapter in the Yocto Project Reference Manual.
+
+The ``wic`` command and the infrastructure it is based on is by
+definition incomplete. The purpose of the command is to allow the
+generation of customized images, and as such, was designed to be
+completely extensible through a plugin interface. See the
+":ref:`dev-manual/wic:using the wic plugin interface`" section
+for information on these plugins.
+
+This section provides some background information on Wic, describes what
+you need to have in place to run the tool, provides instruction on how
+to use the Wic utility, provides information on using the Wic plugins
+interface, and provides several examples that show how to use Wic.
+
+Background
+==========
+
+This section provides some background on the Wic utility. While none of
+this information is required to use Wic, you might find it interesting.
+
+- The name "Wic" is derived from OpenEmbedded Image Creator (oeic). The
+ "oe" diphthong in "oeic" was promoted to the letter "w", because
+ "oeic" is both difficult to remember and to pronounce.
+
+- Wic is loosely based on the Meego Image Creator (``mic``) framework.
+ The Wic implementation has been heavily modified to make direct use
+ of OpenEmbedded build artifacts instead of package installation and
+ configuration, which are already incorporated within the OpenEmbedded
+ artifacts.
+
+- Wic is a completely independent standalone utility that initially
+  provides easier-to-use and more flexible replacements for existing
+  functionality in OE-Core's :ref:`ref-classes-image-live`
+  class. The difference is that with Wic the functionality is
+  implemented by a general-purpose partitioning language, which is based
+  on Red Hat kickstart syntax.
+
+Requirements
+============
+
+To use the Wic utility with the OpenEmbedded build system, your
+system needs to meet the following requirements:
+
+- The Linux distribution on your development host must support the
+ Yocto Project. See the ":ref:`system-requirements-supported-distros`"
+ section in the Yocto Project Reference Manual for the list of
+ distributions that support the Yocto Project.
+
+- The standard system utilities, such as ``cp``, must be installed on
+ your development host system.
+
+- You must have sourced the build environment setup script (i.e.
+ :ref:`structure-core-script`) found in the :term:`Build Directory`.
+
+- You need to have the build artifacts already available, which
+ typically means that you must have already created an image using the
+ OpenEmbedded build system (e.g. ``core-image-minimal``). While it
+ might seem redundant to generate an image in order to create an image
+ using Wic, the current version of Wic requires the artifacts in the
+ form generated by the OpenEmbedded build system.
+
+- You must build several native tools, which are built to run on the
+ build system::
+
+ $ bitbake wic-tools
+
+- Include "wic" as part of the
+ :term:`IMAGE_FSTYPES`
+ variable.
+
+- Include the name of the :ref:`wic kickstart file <openembedded-kickstart-wks-reference>`
+  as part of the :term:`WKS_FILE` variable. If multiple candidate files can
+  be provided by different layers, specify all the possible names through the
+  :term:`WKS_FILES` variable instead. An example configuration covering
+  this and the previous requirement follows this list.
+
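+For example, the following additions to ``conf/local.conf`` satisfy the
+last two requirements above. The kickstart file name is only an
+illustration; use the file that matches your machine or your own custom
+file::
+
+   IMAGE_FSTYPES += "wic"
+   WKS_FILE = "mkefidisk.wks"
+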
+Getting Help
+============
+
+You can get general help for the ``wic`` command by entering the ``wic``
+command by itself or by entering the command with a help argument as
+follows::
+
+ $ wic -h
+ $ wic --help
+ $ wic help
+
+Currently, Wic supports seven commands: ``cp``, ``create``, ``help``,
+``list``, ``ls``, ``rm``, and ``write``. You can get help for all these
+commands except "help" by using the following form::
+
+ $ wic help command
+
+For example, the following command returns help for the ``write``
+command::
+
+ $ wic help write
+
+Wic supports help for three topics: ``overview``, ``plugins``, and
+``kickstart``. You can get help for any topic using the following form::
+
+ $ wic help topic
+
+For example, the following returns overview help for Wic::
+
+ $ wic help overview
+
+There is one additional level of help for Wic. You can get help on
+individual images through the ``list`` command. You can use the ``list``
+command to return the available Wic images as follows::
+
+ $ wic list images
+ genericx86 Create an EFI disk image for genericx86*
+ beaglebone-yocto Create SD card image for Beaglebone
+ qemuriscv Create qcow2 image for RISC-V QEMU machines
+ mkefidisk Create an EFI disk image
+ qemuloongarch Create qcow2 image for LoongArch QEMU machines
+ directdisk-multi-rootfs Create multi rootfs image using rootfs plugin
+ directdisk Create a 'pcbios' direct disk image
+ efi-bootdisk
+ mkhybridiso Create a hybrid ISO image
+ directdisk-gpt Create a 'pcbios' direct disk image
+ systemd-bootdisk Create an EFI disk image with systemd-boot
+ sdimage-bootpart Create SD card image with a boot partition
+ qemux86-directdisk Create a qemu machine 'pcbios' direct disk image
+ directdisk-bootloader-config Create a 'pcbios' direct disk image with custom bootloader config
+
+Once you know the list of available
+Wic images, you can use ``help`` with the ``list`` command to get help on a
+particular image. For example, the following command returns help on the
+"beaglebone-yocto" image::
+
+ $ wic list beaglebone-yocto help
+
+ Creates a partitioned SD card image for Beaglebone.
+ Boot files are located in the first vfat partition.
+
+Operational Modes
+=================
+
+You can use Wic in two different modes, depending on how much control
+you need for specifying the OpenEmbedded build artifacts that are used
+for creating the image. The two modes are as follows:
+
+- *Raw Mode:* You explicitly specify build artifacts through Wic
+ command-line arguments.
+
+- *Cooked Mode:* The current
+ :term:`MACHINE` setting and image
+ name are used to automatically locate and provide the build
+ artifacts. You just supply a kickstart file and the name of the image
+ from which to use artifacts.
+
+Regardless of the mode you use, you need to have the build artifacts
+ready and available.
+
+Raw Mode
+--------
+
+Running Wic in raw mode allows you to specify all the partitions through
+the ``wic`` command line. The primary use for raw mode is when you have
+built your kernel outside of the Yocto Project :term:`Build Directory`.
+In other words, you can point to arbitrary kernel and root filesystem
+locations, and so forth. Contrast this behavior with cooked mode, where Wic looks in the
+:term:`Build Directory` (e.g. ``tmp/deploy/images/``\ machine).
+
+The general form of the ``wic`` command in raw mode is::
+
+ $ wic create wks_file options ...
+
+ Where:
+
+ wks_file:
+ An OpenEmbedded kickstart file. You can provide
+ your own custom file or use a file from a set of
+ existing files as described by further options.
+
+ optional arguments:
+ -h, --help show this help message and exit
+ -o OUTDIR, --outdir OUTDIR
+ name of directory to create image in
+ -e IMAGE_NAME, --image-name IMAGE_NAME
+ name of the image to use the artifacts from e.g. core-
+ image-sato
+ -r ROOTFS_DIR, --rootfs-dir ROOTFS_DIR
+ path to the /rootfs dir to use as the .wks rootfs
+ source
+ -b BOOTIMG_DIR, --bootimg-dir BOOTIMG_DIR
+ path to the dir containing the boot artifacts (e.g.
+ /EFI or /syslinux dirs) to use as the .wks bootimg
+ source
+ -k KERNEL_DIR, --kernel-dir KERNEL_DIR
+ path to the dir containing the kernel to use in the
+ .wks bootimg
+ -n NATIVE_SYSROOT, --native-sysroot NATIVE_SYSROOT
+ path to the native sysroot containing the tools to use
+ to build the image
+ -s, --skip-build-check
+ skip the build check
+ -f, --build-rootfs build rootfs
+ -c {gzip,bzip2,xz}, --compress-with {gzip,bzip2,xz}
+ compress image with specified compressor
+ -m, --bmap generate .bmap
+ --no-fstab-update Do not change fstab file.
+ -v VARS_DIR, --vars VARS_DIR
+ directory with <image>.env files that store bitbake
+ variables
+ -D, --debug output debug information
+
+.. note::
+
+ You do not need root privileges to run Wic. In fact, you should not
+ run as root when using the utility.
+
+Cooked Mode
+-----------
+
+Running Wic in cooked mode uses the artifacts in the
+:term:`Build Directory`. In other words, you do not have to specify kernel or
+root filesystem locations as part of the command. All you need to provide is
+a kickstart file and the name of the image from which to use artifacts
+by using the "-e" option. Wic looks in the :term:`Build Directory` (e.g.
+``tmp/deploy/images/``\ machine) for artifacts.
+
+The general form of the ``wic`` command using Cooked Mode is as follows::
+
+ $ wic create wks_file -e IMAGE_NAME
+
+ Where:
+
+ wks_file:
+ An OpenEmbedded kickstart file. You can provide
+ your own custom file or use a file from a set of
+ existing files provided with the Yocto Project
+ release.
+
+ required argument:
+ -e IMAGE_NAME, --image-name IMAGE_NAME
+ name of the image to use the artifacts from e.g. core-
+ image-sato
+
+Using an Existing Kickstart File
+================================
+
+If you do not want to create your own kickstart file, you can use an
+existing file provided by the Wic installation. As shipped, kickstart
+files can be found in the :ref:`overview-manual/development-environment:yocto project source repositories` in the
+following two locations::
+
+ poky/meta-yocto-bsp/wic
+ poky/scripts/lib/wic/canned-wks
+
+Use the following command to list the available kickstart files::
+
+ $ wic list images
+ genericx86 Create an EFI disk image for genericx86*
+ beaglebone-yocto Create SD card image for Beaglebone
+ qemuriscv Create qcow2 image for RISC-V QEMU machines
+ mkefidisk Create an EFI disk image
+ qemuloongarch Create qcow2 image for LoongArch QEMU machines
+ directdisk-multi-rootfs Create multi rootfs image using rootfs plugin
+ directdisk Create a 'pcbios' direct disk image
+ efi-bootdisk
+ mkhybridiso Create a hybrid ISO image
+ directdisk-gpt Create a 'pcbios' direct disk image
+ systemd-bootdisk Create an EFI disk image with systemd-boot
+ sdimage-bootpart Create SD card image with a boot partition
+ qemux86-directdisk Create a qemu machine 'pcbios' direct disk image
+ directdisk-bootloader-config Create a 'pcbios' direct disk image with custom bootloader config
+
+When you use an existing file, you
+do not have to use the ``.wks`` extension. Here is an example in Raw
+Mode that uses the ``directdisk`` file::
+
+ $ wic create directdisk -r rootfs_dir -b bootimg_dir \
+ -k kernel_dir -n native_sysroot
+
+Here are the actual partition language commands used in the
+``genericx86.wks`` file to generate an image::
+
+ # short-description: Create an EFI disk image for genericx86*
+ # long-description: Creates a partitioned EFI disk image for genericx86* machines
+ part /boot --source bootimg-efi --sourceparams="loader=grub-efi" --ondisk sda --label msdos --active --align 1024
+ part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024 --use-uuid
+ part swap --ondisk sda --size 44 --label swap1 --fstype=swap
+
+ bootloader --ptable gpt --timeout=5 --append="rootfstype=ext4 console=ttyS0,115200 console=tty0"
+
+Using the Wic Plugin Interface
+==============================
+
+You can extend and specialize Wic functionality by using Wic plugins.
+This section explains the Wic plugin interface.
+
+.. note::
+
+ Wic plugins consist of "source" and "imager" plugins. Imager plugins
+ are beyond the scope of this section.
+
+Source plugins provide a mechanism to customize partition content during
+the Wic image generation process. You can use source plugins to map
+values that you specify using ``--source`` commands in kickstart files
+(i.e. ``*.wks``) to a plugin implementation used to populate a given
+partition.
+
+.. note::
+
+ If you use plugins that have build-time dependencies (e.g. native
+ tools, bootloaders, and so forth) when building a Wic image, you need
+ to specify those dependencies using the :term:`WKS_FILE_DEPENDS`
+ variable.
+
+Source plugins are subclasses defined in plugin files. As shipped, the
+Yocto Project provides several plugin files. You can see the source
+plugin files that ship with the Yocto Project
+:yocto_git:`here </poky/tree/scripts/lib/wic/plugins/source>`.
+Each of these plugin files contains source plugins that are designed to
+populate a specific Wic image partition.
+
+Source plugins are subclasses of the ``SourcePlugin`` class, which is
+defined in the ``poky/scripts/lib/wic/pluginbase.py`` file. For example,
+the ``BootimgEFIPlugin`` source plugin found in the ``bootimg-efi.py``
+file is a subclass of the ``SourcePlugin`` class, which is found in the
+``pluginbase.py`` file.
+
+You can also implement source plugins in a layer outside of the Source
+Repositories (external layer). To do so, be sure that your plugin files
+are located in a directory whose path is
+``scripts/lib/wic/plugins/source/`` within your external layer. When the
+plugin files are located there, the source plugins they contain are made
+available to Wic.
+
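+As a rough illustration, a minimal source plugin placed in such an
+external layer might look like the following sketch. The file name,
+plugin name, and method body are hypothetical; the import path matches
+the plugins shipped with the Yocto Project, and the ``name`` attribute
+is what ties the plugin to a ``--source`` value in a ``.wks`` file::
+
+   # meta-mylayer/scripts/lib/wic/plugins/source/mycontent.py (hypothetical)
+   from wic.pluginbase import SourcePlugin
+
+   class MyContentPlugin(SourcePlugin):
+       """Populate a partition with custom content (illustrative only)."""
+
+       # Matches "--source mycontent" in a kickstart (.wks) file.
+       name = 'mycontent'
+
+       @classmethod
+       def do_prepare_partition(cls, part, source_params, creator, cr_workdir,
+                                oe_builddir, bootimg_dir, kernel_dir,
+                                rootfs_dir, native_sysroot):
+           # Generate the partition content here, for example by creating a
+           # filesystem image and recording its location on the 'part' object.
+           pass
+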
+When the Wic implementation needs to invoke a partition-specific
+implementation, it looks for the plugin with the same name as the
+``--source`` parameter used in the kickstart file given to that
+partition. For example, if the partition is set up using the following
+command in a kickstart file::
+
+ part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
+
+Then, the methods defined as class
+members of the matching source plugin (i.e. ``bootimg-pcbios``) in the
+``bootimg-pcbios.py`` plugin file are used.
+
+To be more concrete, here is the corresponding plugin definition from
+the ``bootimg-pcbios.py`` file for the previous command along with an
+example method called by the Wic implementation when it needs to prepare
+a partition using an implementation-specific function::
+
+ .
+ .
+ .
+ class BootimgPcbiosPlugin(SourcePlugin):
+ """
+ Create MBR boot partition and install syslinux on it.
+ """
+
+ name = 'bootimg-pcbios'
+ .
+ .
+ .
+ @classmethod
+ def do_prepare_partition(cls, part, source_params, creator, cr_workdir,
+ oe_builddir, bootimg_dir, kernel_dir,
+ rootfs_dir, native_sysroot):
+ """
+ Called to do the actual content population for a partition i.e. it
+ 'prepares' the partition to be incorporated into the image.
+ In this case, prepare content for legacy bios boot partition.
+ """
+ .
+ .
+ .
+
+If a
+subclass (plugin) itself does not implement a particular function, Wic
+locates and uses the default version in the superclass. It is for this
+reason that all source plugins are derived from the ``SourcePlugin``
+class.
+
+The ``SourcePlugin`` class defined in the ``pluginbase.py`` file defines
+a set of methods that source plugins can implement or override. Any
+plugin (subclass of ``SourcePlugin``) that does not implement a
+particular method inherits the implementation of the method from the
+``SourcePlugin`` class. See the ``SourcePlugin`` class in the
+``pluginbase.py`` file for details.
+
+The following list describes the methods implemented in the
+``SourcePlugin`` class:
+
+- ``do_prepare_partition()``: Called to populate a partition with
+ actual content. In other words, the method prepares the final
+ partition image that is incorporated into the disk image.
+
+- ``do_configure_partition()``: Called before
+ ``do_prepare_partition()`` to create custom configuration files for a
+ partition (e.g. syslinux or grub configuration files).
+
+- ``do_install_disk()``: Called after all partitions have been
+ prepared and assembled into a disk image. This method provides a hook
+ to allow finalization of a disk image (e.g. writing an MBR).
+
+- ``do_stage_partition()``: Special content-staging hook called
+ before ``do_prepare_partition()``. This method is normally empty.
+
+ Typically, a partition just uses the passed-in parameters (e.g. the
+ unmodified value of ``bootimg_dir``). However, in some cases, things
+ might need to be more tailored. As an example, certain files might
+ additionally need to be taken from ``bootimg_dir + /boot``. This hook
+ allows those files to be staged in a customized fashion.
+
+ .. note::
+
+ ``get_bitbake_var()`` allows you to access non-standard variables that
+ you might want to use for this behavior.
+
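+For example, the ``get_bitbake_var()`` helper mentioned in the note
+above can be used inside any of these methods to read BitBake
+variables. The following fragment is only a sketch; in recent Wic
+versions the helper is importable from ``wic.misc``, although the exact
+module path may differ between releases::
+
+   from wic.misc import get_bitbake_var
+
+   # Read standard (or non-standard) BitBake variables from the image
+   # environment and use them to decide what to stage for the partition.
+   deploy_dir = get_bitbake_var("DEPLOY_DIR_IMAGE")
+   machine = get_bitbake_var("MACHINE")
+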
+You can extend the source plugin mechanism. To add more hooks, create
+more source plugin methods within ``SourcePlugin`` and the corresponding
+derived subclasses. The code that calls the plugin methods uses the
+``plugin.get_source_plugin_methods()`` function to find the method or
+methods needed by the call. Retrieval of those methods is accomplished
+by filling up a dict with keys that contain the method names of
+interest. On success, these will be filled in with the actual methods.
+See the Wic implementation for examples and details.
+
+Wic Examples
+============
+
+This section provides several examples that show how to use the Wic
+utility. All the examples assume the list of requirements in the
+":ref:`dev-manual/wic:requirements`" section have been met. The
+examples assume the previously generated image is
+``core-image-minimal``.
+
+Generate an Image using an Existing Kickstart File
+--------------------------------------------------
+
+This example runs in Cooked Mode and uses the ``mkefidisk`` kickstart
+file::
+
+ $ wic create mkefidisk -e core-image-minimal
+ INFO: Building wic-tools...
+ .
+ .
+ .
+ INFO: The new image(s) can be found here:
+ ./mkefidisk-201804191017-sda.direct
+
+ The following build artifacts were used to create the image(s):
+ ROOTFS_DIR: /home/stephano/yocto/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/rootfs
+ BOOTIMG_DIR: /home/stephano/yocto/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
+ KERNEL_DIR: /home/stephano/yocto/build/tmp-glibc/deploy/images/qemux86
+ NATIVE_SYSROOT: /home/stephano/yocto/build/tmp-glibc/work/i586-oe-linux/wic-tools/1.0-r0/recipe-sysroot-native
+
+ INFO: The image(s) were created using OE kickstart file:
+ /home/stephano/yocto/openembedded-core/scripts/lib/wic/canned-wks/mkefidisk.wks
+
+The previous example shows the easiest way to create an image by running
+in cooked mode and supplying a kickstart file and the "-e" option to
+point to the existing build artifacts. Your ``local.conf`` file needs to
+have the :term:`MACHINE` variable set
+to the machine you are using, which is "qemux86" in this example.
+
+Once the image builds, the output provides image location, artifact use,
+and kickstart file information.
+
+.. note::
+
+ You should always verify the details provided in the output to make
+ sure that the image was indeed created exactly as expected.
+
+Continuing with the example, you can now write the image from the
+:term:`Build Directory` onto a USB stick, or whatever media for which you
+built your image, and boot from the media. You can write the image by using
+``bmaptool`` or ``dd``::
+
+ $ oe-run-native bmaptool-native bmaptool copy mkefidisk-201804191017-sda.direct /dev/sdX
+
+or ::
+
+ $ sudo dd if=mkefidisk-201804191017-sda.direct of=/dev/sdX
+
+.. note::
+
+ For more information on how to use the ``bmaptool``
+ to flash a device with an image, see the
+ ":ref:`dev-manual/bmaptool:flashing images using \`\`bmaptool\`\``"
+ section.
+
+Using a Modified Kickstart File
+-------------------------------
+
+Because partitioned image creation is driven by the kickstart file, it
+is easy to affect image creation by changing the parameters in the file.
+This next example demonstrates that through modification of the
+``directdisk-gpt`` kickstart file.
+
+As mentioned earlier, you can use the command ``wic list images`` to
+show the list of existing kickstart files. The directory in which the
+``directdisk-gpt.wks`` file resides is
+``scripts/lib/wic/canned-wks/``, which is located in the
+:term:`Source Directory` (e.g. ``poky``).
+Because available files reside in this directory, you can create and add
+your own custom files to the directory. Subsequent use of the
+``wic list images`` command would then include your kickstart files.
+
+In this example, the existing ``directdisk-gpt`` file already does most
+of what is needed. However, for the hardware in this example, the image
+will need to boot from ``sdb`` instead of ``sda``, which is what the
+``directdisk-gpt`` kickstart file uses.
+
+The example begins by making a copy of the ``directdisk-gpt.wks`` file
+in the ``scripts/lib/wic/canned-wks`` directory and then by changing
+the lines that specify the target disk from which to boot::
+
+ $ cp /home/stephano/yocto/poky/scripts/lib/wic/canned-wks/directdisk-gpt.wks \
+ /home/stephano/yocto/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
+
+Next, the example modifies the ``directdisksdb-gpt.wks`` file and
+changes all instances of "``--ondisk sda``" to "``--ondisk sdb``". The
+example changes the following two lines and leaves the remaining lines
+untouched::
+
+ part /boot --source bootimg-pcbios --ondisk sdb --label boot --active --align 1024
+ part / --source rootfs --ondisk sdb --fstype=ext4 --label platform --align 1024 --use-uuid
+
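+If you prefer not to edit the copy by hand, the same substitution can
+be made with ``sed`` (an optional convenience, not part of the original
+example)::
+
+   $ sed -i 's/--ondisk sda/--ondisk sdb/g' \
+         /home/stephano/yocto/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
+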
+Once the lines are changed, the
+example generates the ``directdisksdb-gpt`` image. The command points
+the process at the ``core-image-minimal`` artifacts for the Next Unit of
+Computing (nuc) :term:`MACHINE` that the
+``local.conf`` file specifies::
+
+ $ wic create directdisksdb-gpt -e core-image-minimal
+ INFO: Building wic-tools...
+ .
+ .
+ .
+ Initialising tasks: 100% |#######################################| Time: 0:00:01
+ NOTE: Executing SetScene Tasks
+ NOTE: Executing RunQueue Tasks
+ NOTE: Tasks Summary: Attempted 1161 tasks of which 1157 didn't need to be rerun and all succeeded.
+ INFO: Creating image(s)...
+
+ INFO: The new image(s) can be found here:
+ ./directdisksdb-gpt-201710090938-sdb.direct
+
+ The following build artifacts were used to create the image(s):
+ ROOTFS_DIR: /home/stephano/yocto/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/rootfs
+ BOOTIMG_DIR: /home/stephano/yocto/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
+ KERNEL_DIR: /home/stephano/yocto/build/tmp-glibc/deploy/images/qemux86
+ NATIVE_SYSROOT: /home/stephano/yocto/build/tmp-glibc/work/i586-oe-linux/wic-tools/1.0-r0/recipe-sysroot-native
+
+ INFO: The image(s) were created using OE kickstart file:
+ /home/stephano/yocto/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
+
+Continuing with the example, you can now directly ``dd`` the image to a
+USB stick, or whatever media for which you built your image, and boot
+the resulting media::
+
+ $ sudo dd if=directdisksdb-gpt-201710090938-sdb.direct of=/dev/sdb
+ 140966+0 records in
+ 140966+0 records out
+ 72174592 bytes (72 MB, 69 MiB) copied, 78.0282 s, 925 kB/s
+ $ sudo eject /dev/sdb
+
+Using a Modified Kickstart File and Running in Raw Mode
+-------------------------------------------------------
+
+This next example manually specifies each build artifact (runs in Raw
+Mode) and uses a modified kickstart file. The example also uses the
+``-o`` option to cause Wic to create the output somewhere other than the
+default output directory, which is the current directory::
+
+ $ wic create test.wks -o /home/stephano/testwic \
+ --rootfs-dir /home/stephano/yocto/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/rootfs \
+ --bootimg-dir /home/stephano/yocto/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share \
+ --kernel-dir /home/stephano/yocto/build/tmp/deploy/images/qemux86 \
+ --native-sysroot /home/stephano/yocto/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native
+
+ INFO: Creating image(s)...
+
+ INFO: The new image(s) can be found here:
+ /home/stephano/testwic/test-201710091445-sdb.direct
+
+ The following build artifacts were used to create the image(s):
+ ROOTFS_DIR: /home/stephano/yocto/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/rootfs
+ BOOTIMG_DIR: /home/stephano/yocto/build/tmp-glibc/work/qemux86-oe-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
+ KERNEL_DIR: /home/stephano/yocto/build/tmp-glibc/deploy/images/qemux86
+ NATIVE_SYSROOT: /home/stephano/yocto/build/tmp-glibc/work/i586-oe-linux/wic-tools/1.0-r0/recipe-sysroot-native
+
+ INFO: The image(s) were created using OE kickstart file:
+ test.wks
+
+For this example,
+:term:`MACHINE` did not have to be
+specified in the ``local.conf`` file since the artifacts are manually
+specified.
+
+Using Wic to Manipulate an Image
+--------------------------------
+
+Wic image manipulation allows you to shorten turnaround time during
+image development. For example, you can use Wic to delete the kernel
+partition of a Wic image and then insert a newly built kernel. This
+saves you from having to rebuild the entire image each time you
+modify the kernel.
+
+.. note::
+
+ In order to use Wic to manipulate a Wic image as in this example,
+ your development machine must have the ``mtools`` package installed.
+
+The following example examines the contents of the Wic image, deletes
+the existing kernel, and then inserts a new kernel:
+
+#. *List the Partitions:* Use the ``wic ls`` command to list all the
+ partitions in the Wic image::
+
+ $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic
+ Num Start End Size Fstype
+ 1 1048576 25041919 23993344 fat16
+ 2 25165824 72157183 46991360 ext4
+
+ The previous output shows two partitions in the
+ ``core-image-minimal-qemux86.wic`` image.
+
+#. *Examine a Particular Partition:* Use the ``wic ls`` command again
+ but in a different form to examine a particular partition.
+
+ .. note::
+
+ You can get command usage on any Wic command using the following
+ form::
+
+ $ wic help command
+
+
+ For example, the following command shows you the various ways to
+ use the
+ wic ls
+ command::
+
+ $ wic help ls
+
+
+ The following command shows what is in partition one::
+
+ $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1
+ Volume in drive : is boot
+ Volume Serial Number is E894-1809
+ Directory for ::/
+
+ libcom32 c32 186500 2017-10-09 16:06
+ libutil c32 24148 2017-10-09 16:06
+ syslinux cfg 220 2017-10-09 16:06
+ vesamenu c32 27104 2017-10-09 16:06
+ vmlinuz 6904608 2017-10-09 16:06
+ 5 files 7 142 580 bytes
+ 16 582 656 bytes free
+
+ The previous output shows five files, with the
+ ``vmlinuz`` being the kernel.
+
+ .. note::
+
+ If you see the following error, you need to update or create a
+ ``~/.mtoolsrc`` file and be sure to have the line "mtools_skip_check=1"
+ in the file. Then, run the Wic command again::
+
+ ERROR: _exec_cmd: /usr/bin/mdir -i /tmp/wic-parttfokuwra ::/ returned '1' instead of 0
+ output: Total number of sectors (47824) not a multiple of sectors per track (32)!
+ Add mtools_skip_check=1 to your .mtoolsrc file to skip this test
+
+
+#. *Remove the Old Kernel:* Use the ``wic rm`` command to remove the
+ ``vmlinuz`` file (kernel)::
+
+ $ wic rm tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
+
+#. *Add In the New Kernel:* Use the ``wic cp`` command to add the
+ updated kernel to the Wic image. Depending on how you built your
+ kernel, it could be in different places. If you used ``devtool`` and
+ an SDK to build your kernel, it resides in the ``tmp/work`` directory
+ of the extensible SDK. If you used ``make`` to build the kernel, the
+ kernel will be in the ``workspace/sources`` area.
+
+ The following example assumes ``devtool`` was used to build the
+ kernel::
+
+ $ wic cp poky_sdk/tmp/work/qemux86-poky-linux/linux-yocto/4.12.12+git999-r0/linux-yocto-4.12.12+git999/arch/x86/boot/bzImage \
+ poky/build/tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
+
+ Once the new kernel is added back into the image, you can use the
+ ``dd`` command or :ref:`bmaptool
+ <dev-manual/bmaptool:flashing images using \`\`bmaptool\`\`>`
+ to flash your wic image onto an SD card or USB stick and test your
+ target.
+
+ .. note::
+
+ Using ``bmaptool`` is generally 10 to 20 times faster than using ``dd``.
+
diff --git a/documentation/dev-manual/x32-psabi.rst b/documentation/dev-manual/x32-psabi.rst
new file mode 100644
index 0000000000..92b1f96fa4
--- /dev/null
+++ b/documentation/dev-manual/x32-psabi.rst
@@ -0,0 +1,54 @@
+.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
+
+Using x32 psABI
+***************
+
+x32 processor-specific Application Binary Interface (`x32
+psABI <https://software.intel.com/en-us/node/628948>`__) is a native
+32-bit processor-specific ABI for Intel 64 (x86-64) architectures. An
+ABI defines the calling conventions between functions in a processing
+environment. The interface determines what registers are used and what
+the sizes are for various C data types.
+
+Some processing environments prefer using 32-bit applications even when
+running on Intel 64-bit platforms. Consider the i386 psABI, which is a
+very old 32-bit ABI for Intel 64-bit platforms. The i386 psABI does not
+provide efficient use and access of the Intel 64-bit processor
+resources, leaving the system underutilized. Now consider the x86_64
+psABI. This ABI is newer and uses 64-bits for data sizes and program
+pointers. The extra bits increase the footprint size of the programs,
+libraries, and also increases the memory and file system size
+requirements. Executing under the x32 psABI enables user programs to
+utilize CPU and system resources more efficiently while keeping the
+memory footprint of the applications low. Extra bits are used for
+registers but not for addressing mechanisms.
+
+The Yocto Project supports the final specifications of x32 psABI as
+follows:
+
+- You can create packages and images in x32 psABI format on x86_64
+ architecture targets.
+
+- You can successfully build recipes with the x32 toolchain.
+
+- You can create and boot ``core-image-minimal`` and
+ ``core-image-sato`` images.
+
+- There is RPM Package Manager (RPM) support for x32 binaries.
+
+- There is support for large images.
+
+To use the x32 psABI, you need to edit your ``conf/local.conf``
+configuration file as follows::
+
+ MACHINE = "qemux86-64"
+ DEFAULTTUNE = "x86-64-x32"
+ baselib = "${@d.getVar('BASE_LIB:tune-' + (d.getVar('DEFAULTTUNE') \
+ or 'INVALID')) or 'lib'}"
+
+Once you have set
+up your configuration file, use BitBake to build an image that supports
+the x32 psABI. Here is an example::
+
+ $ bitbake core-image-sato
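+
+Once the image is built, you can boot and test it as you would any
+other image for this machine. For example, because :term:`MACHINE` is
+set to "qemux86-64" in this configuration, you can boot the most
+recently built image under QEMU (a standard workflow, not specific to
+x32)::
+
+   $ runqemu qemux86-64
+
+As a quick sanity check, note that user-space binaries built for the
+x32 psABI are 32-bit ELF objects that target the x86-64 architecture,
+which is how tools such as ``file`` report them.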
+