path: root/recipes-core/initscripts
2015-04-14  luv-test-manager: Reboot after tests complete  (HEAD, master)  [Matt Fleming]
For the purposes of wiring up LUV into an automated test environment, it's handy to have the machine automatically reboot after a configurable timeout period. It's important that we don't synchronously wait for the reboot and that we continue to drop the user at a shell on the serial console; otherwise we'd lose one of the most useful methods of debugging.

Read the timeout period, measured in seconds, from the EFI variable named LuvTimeout with GUID 3b6bf55d-a6f6-45cf-9f7e-ebf3bdadf74e. If the variable doesn't exist, use a default timeout of 5 minutes. The reboot functionality can be disabled by specifying the "luv.noreboot" kernel command line parameter.

Acked-by: Ricardo Neri <ricardo.neri@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
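The logic described above might look roughly like this. This is a hedged sketch, not the actual luv-test-manager code: the efivarfs path layout (4 attribute bytes before the payload) is real, but the assumption that the payload is an ASCII decimal string, and the LUV_TIMEOUT_VAR/CMDLINE overrides, are inventions to keep the snippet self-contained.

```shell
#!/bin/sh
# Hypothetical sketch of the configurable-reboot logic; not the real script.

# Overridable only so the snippet can be exercised without real firmware.
LUV_TIMEOUT_VAR=${LUV_TIMEOUT_VAR:-/sys/firmware/efi/efivars/LuvTimeout-3b6bf55d-a6f6-45cf-9f7e-ebf3bdadf74e}
DEFAULT_TIMEOUT=300   # five minutes, in seconds

get_reboot_timeout() {
    # efivarfs prefixes the variable data with 4 bytes of attribute
    # flags, so skip them before reading the payload (assumed here to
    # be an ASCII decimal number of seconds).
    if [ -r "$LUV_TIMEOUT_VAR" ]; then
        t=$(tail -c +5 "$LUV_TIMEOUT_VAR" | tr -cd '0-9')
        if [ -n "$t" ]; then
            echo "$t"
            return
        fi
    fi
    echo "$DEFAULT_TIMEOUT"
}

reboot_wanted() {
    # "luv.noreboot" on the kernel command line disables the reboot.
    ! grep -q 'luv\.noreboot' "${CMDLINE:-/proc/cmdline}"
}

# The reboot itself would be fired from a background subshell so the
# boot sequence can still drop the user at a shell on the serial console:
#   reboot_wanted && ( sleep "$(get_reboot_timeout)" && reboot ) &
```

Backgrounding the sleep-and-reboot is the key point: the serial console shell stays available for debugging during the entire timeout window.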
2015-04-14  luv-test-manager: Add luv.halt cmdline option for powering off  [Matt Fleming]
When running automated tests it's super handy to be able to power off the machine once the tests have completed. This signals to any monitoring processes/machines that results are ready to be inspected.

If you've got a smart PDU you should be able to query it for the machine's status to know when it's off, but a poor man's solution would be to ping the machine and wait a couple of seconds extra once it disappears from the network.

Reviewed-by: Naresh Bhat <naresh.bhat@linaro.org>
Tested-by: Naresh Bhat <naresh.bhat@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
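A minimal sketch of the option check; the real luv-test-manager may parse the command line differently, and the CMDLINE override exists only so the snippet doesn't depend on the host's /proc/cmdline.

```shell
#!/bin/sh
# Hypothetical check for the "luv.halt" kernel command line option.

halt_wanted() {
    grep -q 'luv\.halt' "${CMDLINE:-/proc/cmdline}"
}

# At the end of the test run, something along these lines would fire:
#   if halt_wanted; then poweroff; fi
```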
2015-03-27  luv-test-parser: Add timestamp field to schema  [Matt Fleming]
Including a timestamp field in the output of the test results is useful for gathering performance data, i.e. how long each unit test took to run.

Increment the schema version to v1.1 while retaining backwards compatibility with v1.0. This allows parsers to be upgraded piecemeal to the new schema. Upgrading parsers individually is very powerful, since we don't currently have any way to gather timestamp data from BITS, and can't update the BITS parser to v1.1 of the schema right now.

Cc: Gayatri Kammela <gayatri.kammela@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
2014-11-05  luv-test-manager: show luv release version in luv.results  [Ricardo Neri]
As more LUV versions are released and the user base grows, it is important to know which particular version of LUV a given user is running. Knowing the version makes it easier to provide support and to comment on bugs and supported features.

Containing a summary of all the tests, luv.results is a good place to print the LUV version. The version is pulled from the /etc/issue file, which is updated with every release.

While here, also update the welcome message on the console to show the LUV version.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
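Pulling the version string might be as simple as the sketch below. The assumption that the first non-empty line of /etc/issue carries the release banner is mine, and the ISSUE_FILE override exists only to make the snippet self-contained.

```shell
#!/bin/sh
# Hypothetical helper for reading the LUV release string.

get_luv_version() {
    # Print the first non-empty line of /etc/issue and stop.
    sed -n '/./{p;q;}' "${ISSUE_FILE:-/etc/issue}"
}

# Illustrative use when writing the results summary:
#   echo "Results produced by $(get_luv_version)" >> luv.results
```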
2014-11-05  luv: Manually flush stdout from gawk  [Matt Fleming]
The internal buffering that gawk does makes the test output pretty useless, because it's not always possible to tell which test is currently running. For example, a test may have completed but the output will not appear on the screen until the output buffer fills and is subsequently flushed. Effectively, all the unit test results from a single test suite are output as one block.

The wakealarm test from fwts provides a good illustration of the user-visible problem. This test takes a number of seconds to complete, but because all the fwts results are output in one go, it's not possible to attribute delays to any one individual unit test.

Explicitly flush all open file descriptors and pipes any time we print something from gawk. This gives much better user interaction when looking at the serial console because it's now possible to figure out which tests have the longest latencies. Whenever a unit test begins execution a message will be printed on the serial console immediately, e.g.

  [+] wakealarm...

and when it finishes (in this case after multiple seconds) the result will be printed too,

  [+] wakealarm... passed

Tested-by: Gayatri Kammela <gayatri.kammela@intel.com>
Tested-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
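A tiny reproduction of the fix: when stdout is a pipe, awk block-buffers its output, so the "running" line would not appear until the buffer fills. An explicit fflush() after each print makes it show up immediately. LUV's scripts use gawk, where fflush("") flushes every open file and pipe; the plain fflush() shown here flushes stdout and is more widely portable.

```shell
#!/bin/sh
# Demonstration of explicit flushing in an awk output pipeline.

out=$(printf 'wakealarm\n' | awk '{
    printf "[+] %s... ", $1
    fflush()             # the progress line reaches the console now,
                         # not when the buffer eventually fills
    # ... the unit test would run here, possibly for many seconds ...
    print "passed"
    fflush()
}')
echo "$out"
```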
2014-11-05  luv-test-manager: Write test results directly to console  [Matt Fleming]
There's some latency somewhere in the test result pipeline, and it's impossible to trace noticeable hangs when writing test results to the console back to the offending unit test.

Simplify the pipeline and pipe the result output directly to the console and results files, instead of passing it through another instance of gawk before it hits the console, since hunting down buffer-related delays in gawk is extremely tedious.

There's no user-visible change with this patch; it's preparatory work for a later patch that aggressively flushes the gawk output buffer.

Tested-by: Gayatri Kammela <gayatri.kammela@intel.com>
Tested-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-09-30  luv-test-manager: Be more verbose on what to do when tests are complete  [Ricardo Neri]
LuvOS is intended to be an automated testing tool. Thus, it does not really require much interaction from the user other than inserting the bootable media, and removing it when the tests are complete. Accordingly, better inform the user what to do when the tests are complete.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
2014-09-30  bits: Install tests and parser  [Matt Fleming]
bits comes with its own version of the grub bootloader, with custom modules installed as part of the grub image, for example a python module to interpret the python tests.

We must install this boot loader alongside our default one, along with the necessary parsers and test runners to extract the results of the bits tests from userland.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-17  luv-crash-handler: Save dmesg buffer to non-volatile media  [Ricardo Neri]
The dmesg buffer can provide valuable information to determine the causes of a kernel crash. When the system boots to runlevel 3, the luv-crash-handler uses vmcore-dmesg to recover the dmesg buffer from the vmcore dump and saves it to the LuvOS test results partition. This information can then be sent to the developers to further investigate the causes of the crash.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
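The recovery step might be sketched as below, assuming the vmcore-dmesg utility from kexec-tools; the RESULTS_DIR path and the file name are placeholders, not the handler's actual paths.

```shell
#!/bin/sh
# Hypothetical crash-dmesg recovery step.

RESULTS_DIR=${RESULTS_DIR:-/mnt/luv-results}   # placeholder mount point

save_crash_dmesg() {
    # /proc/vmcore only exists in the crash (kdump) kernel entered via
    # kexec, so this is a safe no-op on a normal boot.
    if [ -e /proc/vmcore ]; then
        vmcore-dmesg /proc/vmcore > "$RESULTS_DIR/crash-dmesg.txt"
    fi
}
```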
2014-07-17  luv-test-manager: Remove waiting time for removable media  [Ricardo Neri]
The waiting is already done by the luv-crash-handler. Thus, there is no need to repeat the waiting here.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-26  initscripts: Add a crash handler  [Ricardo Neri]
LuvOS is used to validate UEFI firmware implementations. Such implementations may be at different levels of maturity and thus could pose stability threats to the Linux kernel. It is also possible that the kernel itself has an undiscovered bug. The crash handler aims to handle such situations in a graceful manner.

During a regular boot, at runlevel 5, the handler uses kexec to prepare the system for an eventual crash so that the system can be rebooted to a usable state. In such a recovery boot, memory dumps can be taken to analyse what caused the crash.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
2014-06-26  luv-test-manager: Save results to non-volatile media  [Ricardo Neri]
Add functionality to save the test results to non-volatile media. The non-volatile media is expected to be a disk identified by its UUID. The disk is mounted using the UUID to avoid inadvertently writing to any other disk present in the system.

Clearly, this implementation requires the presence of a disk with the UUID specified by the script. Thus, that same UUID must be used when creating the filesystem.

Also, disks in some systems take a while to become ready for mounting and may not be ready when the LUV test manager runs. Thus, a delay of 5 seconds is introduced in order to increase the likelihood of finding the intended disk.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
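The mount-by-UUID logic might look like the sketch below. The UUID and mount point are placeholders (the real script hard-codes its own), and the LUV_MOUNT_DELAY override is an invention so the delay can be tuned when experimenting.

```shell
#!/bin/sh
# Hypothetical sketch of mounting the results disk by UUID.

LUV_RESULTS_UUID=${LUV_RESULTS_UUID:-00000000-0000-0000-0000-000000000000}  # placeholder
MOUNT_POINT=${MOUNT_POINT:-/mnt/luv-results}                                 # placeholder

mount_results_disk() {
    # Some disks are not ready when the test manager starts, so wait
    # briefly before looking for the device node.
    sleep "${LUV_MOUNT_DELAY:-5}"
    if [ -e "/dev/disk/by-uuid/$LUV_RESULTS_UUID" ]; then
        mkdir -p "$MOUNT_POINT"
        # Mounting by UUID avoids inadvertently writing to another disk.
        mount UUID="$LUV_RESULTS_UUID" "$MOUNT_POINT"
    else
        echo "results disk $LUV_RESULTS_UUID not found; skipping save" >&2
        return 1
    fi
}
```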
2014-06-26  luv-test-manager: Add kernel parameter to disable luv tests  [Matt Fleming]
Adding "noluv" to the kernel command line will skip the luv tests on boot, which is useful in conjunction with "psplash=false" to boot the machine without displaying the splash screen.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-26  luv-test-manager: Save individual test suite results  [Matt Fleming]
Users may want access to individual test suite results, especially when trying to diagnose *why* a particular unit test failed. The results are saved in their native output format, i.e. before any post-processing of the results has occurred, which should provide more context for the failure.

Make /var/log/luv/<test suite> the canonical location for results.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-26  psplash: Enable support for framebuffer splash screen  [Matt Fleming]
By using psplash we can write helpful messages to the framebuffer along with a progress report. psplash has been extended slightly by adding a new keyword "DONE", which signals that progress no longer needs to be monitored and that the progress bar should be deleted. Also, we're using the luv project colours.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-26  luv-test-manager: Display pass, fail and skip count  [Matt Fleming]
It's helpful to summarise these numbers once all the test suites have been run, as it gives concrete values instead of the user being required to count results as they scroll up the screen.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-03  luv-test: Provide a default log parser  [Matt Fleming]
Most test programs will need custom output parsers that know how to parse the test results and write them to stdout in a format suitable for luv-test-manager to consume. This has necessitated moving /etc/luv-tests to /etc/luv/tests and creating a new /etc/luv/parsers directory to contain test suite parsers.

The schema expected by the test manager is detailed in luv-test-parser. Any major additions to the schema should bump the expected version. Remember, the log data may not be generated and consumed on the same machine. For example, it's possible to save the log data over the network and process the logs offline to graph result trends over time.

Any parsers written for test programs should refrain from using the versions of sed or grep built with busybox, because they have strange buffering properties on output which make the tests appear to "hang" for long periods of time. The parsers were written in awk because it does not suffer from this output buffering problem, allowing the creation of an arbitrary pipeline without noticeable delay to the user. Also, awk provides a C-like syntax that is pretty straightforward to understand, even for people without much familiarity with it. While perl would have also been a suitable choice, it isn't currently installed in the core-image-efi image. However, people should feel free to use whatever tool they like.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
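A purely illustrative parser skeleton follows. The real schema is documented in luv-test-parser and is not reproduced here, so both the input lines and the output format below are invented; the snippet only demonstrates the shape such an awk parser takes: read a tool's native output on stdin, emit one normalised line per unit test on stdout, and flush as it goes.

```shell
#!/bin/sh
# Hypothetical awk parser skeleton (input and output formats are made up).

out=$(printf 'TEST wakealarm: PASSED\nTEST mtrr: FAILED\n' | awk '
    /^TEST / {
        name = $2
        sub(/:$/, "", name)                       # strip the trailing colon
        result = ($3 == "PASSED") ? "pass" : "fail"
        print name, result
        fflush()   # avoid the busybox sed/grep-style buffering stalls
    }')
echo "$out"
```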
2014-03-27  initscripts: Add Test Manager  [Matt Fleming]
We need a way of automatically running test programs when the machine boots. Install a luv-test-manager script and invoke it from runlevel 5.

luv-test-manager drives the entire test process. It searches for test runner scripts in /etc/luv-tests/ and executes any that it finds. Currently it only handles the execution of tests, but later on it will also collect their output and parse it into a standard format.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>