|
Add "True" to d.getVar() to avoid forking a little while longer.
This got broken in commit c63e8c0d95604ee5 "swupd-image.bbclass: dont fail parsing if SWUPD_BUNDLES is undefined"
Signed-off-by: Aaron Zinghini <aaron.zinghini@seeingmachines.com>
|
|
ERROR: Error for ...image.bb, dependency ${@' does not contain exactly one ':' character.
Task 'depends' should be specified in the form 'packagename:task'
ERROR: Command execution failed: Exited with 1
Signed-off-by: André Draszik <git@andred.net>
|
|
It is useful to be able to integrate swupd-client into an image that
is not itself subject to swupd based processing.
An example would be an initramfs that contains the client, but that
initramfs itself is a regular file in a different (outer) file system
(image). The outer image would be subject to swupd processing, and the
inner initramfs is simply responsible for updating the outer file system
during system (re)start.
Having split all swupd-client specific functionality into its own class,
the initramfs image recipe can now inherit the client specific class,
and benefit from correct contents for files in /usr/share/defaults/swupd,
correct public keys, and correct URLs.
Signed-off-by: André Draszik <git@andred.net>
|
|
Add "True" to d.getVar() to avoid forking a little while longer.
The expand arg has been set to True by default in the master branch, which breaks compatibility with older branches.
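The behavior difference can be sketched with a toy datastore (the class and values below are illustrative only, not bitbake's actual implementation):

```python
# Toy stand-in for bitbake's datastore; names and the crude ${VAR}
# expansion are illustrative, not the real bb.data_smart code.
class DataStore:
    def __init__(self, values):
        self.values = values

    def getVar(self, name, expand=False):
        value = self.values.get(name)
        if value is not None and expand:
            for key, val in self.values.items():
                value = value.replace('${%s}' % key, val)
        return value

d = DataStore({'MACHINE': 'qemux86', 'DEPLOY': 'deploy/${MACHINE}'})
# Passing the expand argument explicitly yields the same result
# whether the default is False (older branches) or True (master):
assert d.getVar('DEPLOY', True) == 'deploy/qemux86'
assert d.getVar('DEPLOY') == 'deploy/${MACHINE}'
```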
Signed-off-by: Aaron Zinghini <aaron.zinghini@seeingmachines.com>
|
|
The time command's output is merely informational and less relevant
now that performance is better than it used to be. Calling it
unconditionally is problematic because some build hosts might not have
it.
By default the command is no longer used, but can still be enabled
locally by setting SWUPD_TIMING_CMD = "time" in local.conf or
site.conf.
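As described above, re-enabling the timing output is a one-line opt-in:

```
# local.conf or site.conf: wrap the swupd commands in time again
SWUPD_TIMING_CMD = "time"
```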
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Sharing data between virtual images only worked when rm_work.bbclass
was not active.
To support rm_work.bbclass, the new do_swupd_list_bundle generates the
necessary information about the rootfs before do_rm_work removes the
rootfs. The output files and the mega image rootfs.tar get excluded
from the cleanup via the new RM_WORK_EXCLUDE_ITEMS.
While at it, some inaccurate comments get removed.
As a side effect of the more granular work split, it is now possible
to make swupd images depend on exactly those bundle images that they
contain. As a result, a single swupd image can be built without first
having to build all swupd images, which might speed up a build (less
work on the critical path).
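Presumably the class extends the new variable along these lines (the exact item names are assumptions, not taken from the commit):

```
# Sketch: keep the swupd inputs and the mega image archive when
# rm_work.bbclass cleans WORKDIR (item names illustrative)
RM_WORK_EXCLUDE_ITEMS += "swupd rootfs.tar"
```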
Fixes: [YOCTO #10799]
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Creating the mega image archive is on the critical path (depends on
all target components having been compiled and blocks creating
images). Compressing, even with pbzip, is slower than directly writing
the uncompressed archive (tested with a striped RAID array of two
traditional hard drives and a fast multicore CPU) and decompression
again takes additional time, so avoid the slowdown by not compressing.
The downside is higher disk space usage.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
There are reasons for format changes, the upcoming tools update being
one of them. When the format changes, swupd-image.bbclass must build
two OS versions from the same rootfs: once with the old format, once
with the new format.
OS_VERSION is used as number for the new format, OS_VERSION - 1 for
the old one. OS_VERSION must be high enough such that OS_VERSION - 1
is still available. Usually it is, but there's also a sanity check for
that.
When the format changes because of a change in the tools, both the
old and the new swupd-server are needed, so recipe and installed
files now include the tool format version.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Besides being an integer, it also must be in the signed int32 range
supported by swupd.
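A minimal sketch of such a sanity check (function name and exact bounds are assumptions, not copied from the class):

```python
def check_os_version(value):
    # OS_VERSION must parse as an integer and fit into swupd's
    # signed 32-bit range (sketch; the class's bounds may differ)
    try:
        version = int(value)
    except ValueError:
        return False
    return 0 < version < 2**31

assert check_os_version('110')
assert not check_os_version('9999999999')   # exceeds signed int32
assert not check_os_version('1.0')          # not an integer
```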
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Internally, swupd_make_pack mostly just spends its time on a single
CPU while compressing a large .tar archive. We can shorten the overall
execution time by running swupd_make_pack invocations in parallel.
The current approach just runs all of them at once. This might
overload small machines or larger ones when the number of bundles
increases, so some more intelligence might be needed.
Depends on a fix for background processes in the bitbake shell parser
(YOCTO #10668).
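In shell-task terms the class backgrounds each invocation and waits for all of them; a Python sketch of the same fan-out (the commands here are placeholders, not the real swupd_make_pack calls):

```python
import subprocess

# Placeholder commands standing in for the swupd_make_pack
# invocations; each is started immediately and awaited at the end.
bundles = ['os-core', 'editors', 'connectivity']
procs = {b: subprocess.Popen(['echo', 'pack', b],
                             stdout=subprocess.DEVNULL)
         for b in bundles}
failed = [b for b, p in procs.items() if p.wait() != 0]
assert failed == []
```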
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
The previous approaches all relied on somehow carrying additional data
across from one build to the next (sstate or additional archives in
the deploy directory).
The new approach replaces that with downloading required content on a
file-by-file basis from the normal update repo when (and not sooner)
it is needed by swupd_create_pack.
That works for meta-swupd because the format of the files (compressed
archive created with bsdtar) is expected to be stable. If a change
ever becomes necessary, some backward compatibility mode would have to
be added or deltas simply would be skipped again.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Putting the database under the deploy directory automatically
makes it specific to the OS_VERSION and removing the deploy
directory also removes the corresponding pseudo database, thus
ensuring a clean rebuild with a single command.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Not removing the directories is okay: typically we don't
build incrementally, and we can remove any remaining ones before
invoking swupd.
Not removing a temporary directory tree may also have performance
benefits, but the even better solution will be to not write the tree
in the first place by calling libarchive directly.
Related-to: https://bugzilla.yoctoproject.org/show_bug.cgi?id=10623
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
It is better to start each OS_VERSION build with a clean pseudo
database because then performance is expected to be better. Only
relevant for repeated local builds; CI builds already start from
scratch.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Instead of granting all virtual images access to the mega rootfs under
a shared pseudo instance, store the mega rootfs in an archive and
extract from that the subset of entries that are needed.
Sharing pseudo instances is slow: using more than one avoids a
potential bottleneck (the pseudo daemon is often 100% busy on a CPU
during heavy use). Extracting files with full attributes is also
faster when merely sweeping through a tar archive, at least when most
of the content is needed.
This change therefore increases performance.
bsdtar with support for --no-recursive in combination with -x is
needed for that. Current bsdtar master does not support that yet, but
adding it was easy.
GNU tar already supports that, but had bugs in that mode ("Not found
in archive" errors for entries that were in the archive).
bsdtar is also nicer for other reasons and therefore was extended instead
of trying to fix GNU tar:
- no need to explicitly add xattrs
- guaranteed to auto-detect compression, even when reading from
stdin (GNU tar can only do that when working with files); not
that relevant here, though
- uses fewer system calls when creating files, which should
help a bit with performance under pseudo
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Creating updates based on the Manifest.full of the previous build
allows reusing unchanged files, i.e. the work of compressing these
files and storing the result again under "files" is avoided.
This works by referencing the previous version in the new Manifest
files. The implication is that versions can no longer be
published separately. The content produced by all previous builds must
also be available to the client.
This is independent of computing deltas. Nothing besides the previous
"www" content needs to be available. It gets downloaded automatically
when starting a build without a previous swupd deploy directory, so no
extra work is needed to enable this mode besides publishing the
previous build results.
Fixes [YOCTO #9189]
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
The settings affecting the swupd client belong to the image, not the
swupd client recipe. That way, different images can use different
settings while sharing the same swupd client.
Creating the bundles directory was broken in the swupd-client recipe
and also not needed because swupd-image.bbclass does it, too.
This will also allow implementing better update repo generation
(incremental, supporting format changes, etc.) because now
swupd-image.bbclass has access to the settings.
The installed swupd client must match the format of the update repo
for that OS_VERSION. To ensure that, swupd-image.bbclass now adds a
dependency on a virtual "swupd-client-format<format number>" and
suitable swupd client recipe(s) provide that.
Distros then have two ways of choosing a swupd client version,
should that ever be necessary:
- first they need to override the per-image format default value
- then set the preferred swupd client version, if there is more
than one for that format
TODO: installing the SSL pubkey into the image after a file change
does not work.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
This removes the storing of previous build information in sstate. It
was conceptually questionable (sstate is a cache which does not need
to be backed up, while the information about previous builds is
crucial and must not get lost) and not working:
- the -map.inc file wasn't actually included anywhere and thus
the old build information wasn't getting restored
- restoring all previous builds would have made building slower
and slower as the number of previous builds grows
- the old build information lacked the www/Manifest files that
incremental updates need
The replacement puts previous build information into the image deploy
directory. That's tentative and also not fully working.
The automatic selection of old versions to build deltas against also
gets replaced with an explicit choice that has to be made by the user
of meta-swupd. That's because in practice, incremental updates are
more useful when prepared for the releases that actually run on the
target device, like major milestones.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
During development it is useful to wipe out deploy/swupd. This
simulates the "start from scratch" situation in the Ostro OS
CI. Previously it was necessary to force-run do_stage_swupd_inputs and
do_swupd_update after removing the directory, now this is fully
automatic.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
The fact that PN is different in the base image and the virtual
images led to various places which had to distinguish between the two. We can
simplify that by introducing a variable SWUPD_IMAGE_PN which
always has the PN value of the base image.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
do_swupd_update itself unpacks the tar archives that swupd-server
needs and therefore no longer depends on sharing the pseudo
database with the other tasks and virtual images.
Using a separate pseudo DB speeds up
"ostro-image-swupd:do_stage_swupd_inputs
ostro-image-swupd:do_swupd_update ostro-image-swupd-dev" (two tasks
which run in parallel because both depend on the same full rootfs and
which used to share the same pseudo instance) from 25 to 16 minutes.
The pseudo data directory is intentionally inside the deploy/swupd
directory. There it can be deleted and re-created for testing swupd
update generation with:
rm -rf tmp*/deploy/swupd
bitbake -f <image>:do_stage_swupd_inputs <image>:do_swupd_update
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
When creating bundle images, we also need to know and copy the entries
that we exclude from processing by swupd-server. This could be done
with a more complex syntax for the .content.txt files, but that would
also make the swupd-server patches more complicated.
Instead, an .extra-content.txt gets written alongside the
.content.txt and meta-swupd uses that when copying files into
images. Due to the way this is implemented, the .extra-content.txt
of bundles also lists the files that were excluded from the bundle
because they were already in the os-core. This may or may not be
desirable.
This change also includes some other improvements (consistent use of the
helper method, sorting the content of the file lists).
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Storing the build rootfs in the sstate-cache has drawbacks:
- it's questionable whether storing data that cannot be re-created really
belongs in a cache which (by definition) should only contain data which
may get lost
- while it looks attractive to re-use an existing mechanism for sharing data
across builds, it's not a complete solution because the map still needs to
be carried across builds
- using the sstate-cache mechanism adds additional, large copy operations on
the critical path towards completing a build
- the code isn't quite mature yet, sometimes do_stage_swupd_inputs_setscene
fails: sstate_setscene(d) fails: No such file or directory:
'.../tmp-glibc/work/qemux86-ostro-linux/ostro-image-swupd/1.0-r0/sstate-install-stage_swupd_inputs/92909520' ->
'.../tmp-glibc/work/qemux86-ostro-linux/ostro-image-swupd/1.0-r0/swupd-image/92909520'
It might be better to define an explicit "shared build directory"
where the current image directory of a build can be stored for future
use.
In the meantime, disable the mechanism by default to speed up builds
inside a CI system (like the one from Ostro) which is not prepared to
use the mechanism anyway.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Compression with xz is slowing down do_stage_swupd_inputs (on the
critical path) by keeping one CPU 100% busy with xz. gzip compresses
faster and (at least for now) on-disk usage matters less than speed.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
do_stage_swupd_updates works with the entire full tree multiple times:
copying into the staging area, packing it as sstate archive, copying
to the swupd deploy directory.
Copying directory trees is slow, in particular when running under
pseudo, and do_stage_swupd_updates is on the critical path for
completing a build. Therefore it should be as fast as possible.
Storing the directory as compressed archive is faster: it cuts down
the time for do_stage_swupd_updates from 11min in the Ostro CI to
6min. This is with xz as compression method, which is suitable for
long-term archival (good compression) but a lot slower than gzip
(https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/). When
favoring speed, using gzip may be better.
The long-term goal (dream?) is to have swupd work directly with tar
archives, in which case expanding the archive and pseudo could be
avoided altogether.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Include the log output of the swupd tools in the normal stdout/stderr
logfile. That way errors are immediately visible when invoked from
bitbake and in the Ostro OS CI (which only shows the bitbake output).
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Sharing of pseudo databases was broken, leading to files with
wrong attributes: ${IMAGE_BASENAME} is different among all virtual
recipes and thus updating PSEUDO_LOCALSTATEDIR did not have the
desired effect.
Bundle recipes do not need to copy from anything (and thus they do not
depend on the mega image do_image) and also do not need to share the
pseudo database, because all that matters is the list of entries in
their rootfs. Being very specific about the task dependencies allows
more long-running image creation tasks to run in parallel.
Distinguishing between the various virtual image recipes and the base
image is a bit tricky. Therefore the "(virtual) swupd image recipes"
(so called because they get created by swupdimage.bbclass) now unset
BUNDLE_NAME (thus removing the default "os-core" which is set in the
base recipe) and the usage of PN, PN_BASE, and BUNDLE_NAME is
explained in a comment.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Creating individual bundle directories as input for swupd is a waste
of resources and time, because swupd is just going to recreate the
"full" tree anyway.
With an improved swupd-server, we can just copy the full tree once and
then define the content of each bundle with a text file. This replaces
the "files-in-image" files. Those were used only by meta-swupd before.
They were renamed because they not only list files, but also
directories. "content" is a bit more neutral. Creating them is now
done in pure Python and integrated with the SWUPD_FILE_BLACKLIST
mechanism. That way, the content files are correct right away, which
allows removing the post-processing code (for example,
sanitise_file_list()).
The special mode of obtaining bundle content from the package manager
instead of a full rootfs gets dropped for now. If that mode can be
shown to be noticeably faster than full rootfs creation, then it can be
re-added such that it also only produces a content file for the
bundle.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
Using libarchive directly avoids one fork/exec per file in
swupd-make-fullfiles, which improves performance. Several
regressions in the new upstream version had to be fixed
as part of the version update.
The version got updated to make it easier to upstream the libarchive
patch. The latest upstream version actually is 3.2.7, but that version
introduces a format change. Updating to that will require further work
and preparations. Luckily, the source code patches apply cleanly to
3.2.5 and 3.2.7.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
When buildstats detects TaskSucceeded for a do_rootfs task, it tries
to determine the size of the rootfs using du. If the rootfs directory
isn't present, the call to du fails, which triggers a bb.error.
Since e587c50c2639989d02d282c7a91134d5934eb042, do_rootfs for
swupdimage based virtual images has been an empty function which
simply returns as soon as it's invoked, as we populate the rootfs
from the mega rootfs contents in do_image. Because of this, the
rootfs directory wasn't yet present when buildstats detected
TaskSucceeded, and the subsequent call to du failed.
Work around this by creating an empty rootfs directory during
do_rootfs in swupdimage.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
OE-core commit 6d969bacc718e changed do_rootfs so that it creates
IMGDEPLOYDIR. That change broke the creation of additional swupd
images, because setting do_rootfs to empty caused the entire task to
be skipped, including the evaluation of the 'cleandirs' task
attribute.
It remains to be seen whether that's really the desired behavior (see
https://bugzilla.yoctoproject.org/show_bug.cgi?id=10256), but as it is
what it is right now, we need to avoid the situation by overwriting
do_rootfs with non-empty code that doesn't do anything. That way, the
directory gets created.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
|
|
When IMAGE_BASENAME is set at the image recipe level, file ownership
on the target rootfs is incorrect for recipes inheriting
swupd-image.bbclass.
Depending on the context, swupd-image.bbclass used either PN (PN_BASE)
or IMAGE_BASENAME when generating the path to the pseudo shared state
directory. This is correct only when IMAGE_BASENAME is not set,
because it then defaults to PN.
This patch resolves the above problem.
Addresses [YOCTO #10108].
Signed-off-by: Piotr Figiel <p.figiel@camlintechnologies.com>
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Ensure pseudo is available in the sysroot for all tasks which have
the fakeroot flag by adding virtual/fakeroot-native:do_populate_sysroot
to the depends of the tasks.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Drop the swupd_sanity_check_image task and the related
SWUPD_IMAGE_SANITY_CHECKS variable in favour of the recently added
OE-Core image QA mechanism.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
This was a copy-and-paste of the check_output() method of the subprocess
module in order to support Python versions prior to 2.7 -- we should just
use the method from subprocess directly.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Improve the usability of fetching swupd inputs from sstate objects by
writing all known OS_VERSION-->sstate object hash mappings to a variable
assignment in an inc file.
Utilising the ability to fetch swupd inputs for previous versions then
becomes a simple case of including this file, i.e. in an auto.conf or
local.conf.
As the inc file is parsed during a build, all known mappings (those
loaded from the inc file plus the new mapping generated by the current
build) will be written to any newly generated inc file.
Preventing the swupd inputs of old OS_VERSIONs from being fetched and
staged becomes a simple matter of editing the inc file to remove the
no longer required maps.
An expected workflow for building new OS_VERSIONS with a CI would be:
* copy any existing inc file from previous builds to a conf dir such as
${BUILDDIR}/conf
* edit the file to ensure only versions we care about are fetched
* ensure that file is included/required by a conf file such as local.conf
or auto.conf
* build
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
The addtask line for do_swupd_update contained a task which is no longer
defined by swupd-image.bbclass.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
We don't run do_stage_swupd_inputs for derived images, resulting in
no sstate object being generated. Therefore we must also skip
do_stage_swupd_inputs_setscene for derived images.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Ensure the manifest files are written with a uniform filename
pattern for all swupd-image derived images.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
We now ensure the do_swupd_update task is run for the base image as
a dependency of the do_swupd_update task for any derivative image.
Therefore this message is now informative, rather than indicative
of an issue.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
We already have a variable which lists all of the bundles for the
image, including os-core; make use of ALL_BUNDLES instead of
assigning a new variable.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
This change aims to help support more complex OS development
workflows where update artefacts:
* may not be sequential
* may have been generated on a separate system, or in a separate
build directory
With this change we now generate a file after the swupd inputs have
been staged and an sstate object for the inputs created which maps
the OS_VERSION of the build to the name of the sstate object.
A new task will read in these 'map' files and try to ensure that
the swupd inputs are available before do_swupd_update, first by
checking for an sstate object in SSTATE_DIR and, when not present,
attempting to fetch the object from an sstate mirror, before
unpacking the object into the swupd directory for processing.
[YOCTO #9321]
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Dictionaries don't have an iteritems() method in Python 3; use the
items() method instead.
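The portable spelling, runnable on both Python 2.7 and 3 (the bundle data below is made up for illustration):

```python
bundle_contents = {'os-core': ['base-files'], 'editors': ['vim']}

# iteritems() exists only in Python 2; items() works in both
names = sorted(name for name, pkgs in bundle_contents.items())
assert names == ['editors', 'os-core']
```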
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
The manifest files are used for various things so we must be sure
they are available, even if the swupd inputs were staged from a
shared state.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
do_swupd_update is skipped for non-core images. However, as users
may choose to only build a composed os-core + bundles image, ensure
that the do_swupd_update task for all such images depends on the
do_swupd_update task of the core image.
This ensures that the swupd update stream is generated for a new
OS release when suitable (i.e. when an update stream doesn't already
exist for that OS_VERSION).
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
swupdimage was not ported when we moved the various helper methods
out of swupd-image into the lib/swupd module -- this change
rectifies that.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
SWUPD_FILE_BLACKLIST allows a user to list files, with a path in
the target rootfs, that they do not wish to be copied into the
swupd state directory for processing.
This mechanism can be used to prevent files being processed by
swupd which should not be tracked in a manifest file and thus not
processed by swupd-client.
The primary use for this is to exclude files in /etc which are
runtime modified (/etc/mtab) or generated at boot (/etc/machine-id).
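A typical setting, using the two files named above (the exact syntax is assumed to follow the usual space-separated list convention):

```
# Paths are relative to the target rootfs; keep runtime-modified
# and boot-generated files out of the swupd manifests
SWUPD_FILE_BLACKLIST = "\
    /etc/mtab \
    /etc/machine-id \
"
```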
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
swupd-client doesn't support bundles being removed from the server:
if a bundle receipt still exists on the client but the manifest
disappears from the server, the client doesn't know how to handle it.
Provide a workaround for this by adding a SWUPD_EMPTY_BUNDLES
variable which can be used to continue to provide a manifest entry
for a bundle which is otherwise empty. With this workaround the
client can update to the empty bundle, removing files that it used
to provide (unless they are now provided by an alternative bundle).
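For illustration, a bundle retired from SWUPD_BUNDLES would move to the new variable (the bundle names here are hypothetical):

```
# 'feature-x' used to be in SWUPD_BUNDLES; keep publishing an empty
# manifest for it so existing clients can update cleanly
SWUPD_BUNDLES = "editors connectivity"
SWUPD_EMPTY_BUNDLES = "feature-x"
```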
Note: this was implemented as a separate variable, rather than
allowing SWUPD_BUNDLES to be defined without a corresponding
BUNDLE_CONTENTS varflag, as the latter is expected to be more likely
to lead to unexpected results due to accidental misconfiguration.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|
|
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
|