Platform Development with Poky
Software development

Poky supports several methods of software development. You can use the method that is best for you. This chapter describes each development method.
External Development Using the Poky SDK

The meta-toolchain and meta-toolchain-sdk targets (see the images section) build tarballs that contain toolchains and libraries suitable for application development outside of Poky. These tarballs unpack into the /opt/poky directory and contain a setup script (e.g. /opt/poky/environment-setup-i586-poky-linux) which you can source to initialize a suitable environment. Sourcing this script adds the compiler, QEMU scripts, the QEMU binary, a special version of pkgconfig and other useful utilities to the PATH. Variables to assist pkgconfig and autotools are also set so that, for example, configure can find pre-generated test results for tests that would otherwise need target hardware to run on. Using the toolchain with autotools-enabled packages is straightforward; just pass the appropriate --host option to configure, as in the following example:

$ ./configure --host=arm-poky-linux-gnueabi

For other projects it is usually a case of ensuring the cross tools are used, for example by setting CC=arm-poky-linux-gnueabi-gcc and LD=arm-poky-linux-gnueabi-ld.
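As a sketch, a complete session for an ARM toolchain might look like the following. The setup script name and the project are illustrative; use whichever environment-setup-* file your toolchain tarball actually installed:

$ source /opt/poky/environment-setup-arm-poky-linux-gnueabi
$ cd myproject
$ ./configure --host=arm-poky-linux-gnueabi
$ make

For a plain Makefile-based project the same environment can be used by overriding the tool variables, e.g. make CC=arm-poky-linux-gnueabi-gcc LD=arm-poky-linux-gnueabi-ld.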
Developing externally using the Anjuta plugin

An Anjuta IDE plugin exists to make developing software within the Poky framework easier for the application developer. It presents a graphical IDE from which the developer can cross-compile an application, then deploy and execute the output in a QEMU emulation session. It also supports cross-debugging and profiling. To use the plugin, a toolchain and SDK built by Poky are required along with Anjuta, its development headers and the Anjuta plugin itself. The Poky Anjuta plugin is available to download as a tarball from the OpenedHand labs page or directly from the Poky Git repository located at git://git.pokylinux.org/anjuta-poky; a web interface to the repository is also available. See the README file contained in the project for more information on dependencies and building the plugin. If you want to disable remote gdb debugging, pass the --disable-gdb-integration switch when running configure.
Setting up the Anjuta plugin

Extract the toolchain tarball into / as root; the toolchain will be installed into /opt/poky. To use the plugin, first open an existing project or create a new one. If creating a new project, the "C GTK+" project type will allow itself to be cross-compiled; be aware, however, that it uses Glade for the UI. To activate the plugin go to Edit → Preferences, then choose General from the left hand side. Choose the Installed plugins tab, scroll down to Poky SDK and check the box. The plugin is now activated, but it must be configured before use.
Configuring the Anjuta plugin

The configuration options for the SDK can be found by choosing the Poky SDK icon from the left hand side. The following options need to be set:

SDK root: if you are using an external toolchain, you need to set the SDK root. This is the root directory of the SDK's sysroot. For an i586 SDK this will be /opt/poky/. This directory will contain directories named "bin", "include", "var" and so on under your selected target architecture subdirectory /opt/poky/sysroot/i586-poky-linux/. The required cross-compilation tools are under /opt/poky/sysroot/i586-pokysdk-linux/.

Poky root: if you have a local Poky build tree, you need to set the Poky root. This is the root directory of the Poky build tree; if you build your i586 target architecture under the subdirectory build_x86 within your Poky tree, the Poky root directory is ${Poky_tree}/build_x86/.

Target architecture: this is the cross-compile triplet, e.g. "i586-poky-linux". The triplet is the prefix extracted from the setup script file name; for example, "i586-poky-linux" is extracted from the setup script file /opt/poky/environment-setup-i586-poky-linux.

Kernel: use the file chooser to select the kernel to use with QEMU.

Root filesystem: use the file chooser to select the root filesystem directory. This is the directory into which you extracted the poky-image-sdk tarball using the "poky-extract-sdk" command.
Using the Anjuta plugin

As an example, this section walks through cross-compiling a project, deploying it into QEMU, running a debugger against it and then doing a system-wide profile.

Choose Build → Run Configure or Build → Run Autogenerate to run "configure" (or "autogen") for the project. This passes command line arguments to instruct it to cross-compile. Next choose Build → Build Project to build and compile the project. If you have previously built the project in the same tree without using the cross-compiler you may find that your project fails to link; simply choose Build → Clean Project to remove the old binaries, then try building again.

Next start QEMU using Tools → Start QEMU; this starts QEMU and shows any error messages in the message view. Once Poky has fully booted within QEMU you may deploy into it.

Once the project is built and QEMU is running, choose Tools → Deploy. This installs the package into a temporary directory and then copies it into the target using rsync over SSH. Progress and messages are shown in the message view.

To debug a program installed onto the target choose Tools → Debug remote. This prompts for the local binary to debug and also the command line to run on the target; the command line should include the full path to the binary installed on the target. This starts a gdbserver over SSH on the target and also an instance of a cross-gdb in a local terminal. The cross-gdb is preloaded to connect to the server and use the SDK root to find symbols; it connects to the target and loads the various libraries and the target program. You should set up any breakpoints or watchpoints now since you might not be able to interrupt the execution later. You may stop the debugger on the target using Tools → Stop debugger.

It is also possible to execute a command on the target over SSH, with the appropriate environment set for the execution; choose Tools → Run remote to do this. This opens a terminal with the SSH command inside.

To do a system-wide profile against the system running in QEMU choose Tools → Profile remote. This starts OProfileUI with the appropriate parameters to connect to the server running inside QEMU and also supplies the path to the debug information necessary to get a useful profile.
Developing externally in QEMU

Running Poky QEMU images is covered in the Running an Image section. Poky's QEMU images contain a complete native toolchain, which means that applications can be developed within QEMU in the same way as on a normal system. Using qemux86 on an x86 machine is fast since the guest and host architectures match; qemuarm is slower but gives faithful emulation of ARM-specific issues. To speed things up, these images also support using distcc to call a cross-compiler outside the emulated system. If runqemu was used to start QEMU, and distccd is present on the host system, any BitBake cross-compiling toolchain available from the build system will automatically be used from within QEMU simply by calling distcc (export CC="distcc" can be set in the environment). Alternatively, if a suitable SDK/toolchain is present in /opt/poky it will also automatically be used.

There are several options for connecting into the emulated system. QEMU provides a framebuffer interface which has standard consoles available. There is also a serial connection available which has a console to the system running on it, and IP networking as standard. The images run a Dropbear SSH server with the root password disabled, allowing standard ssh and scp commands to work. The images also contain an NFS server exporting the guest's root filesystem, allowing that to be made available to the host.
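For instance, inside the emulated system a build can be pointed at the host's cross-compiler through distcc with something like the following sketch (the project is hypothetical; the only requirement is that distccd is running on the host as described above):

$ export CC="distcc"
$ ./configure
$ make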
Developing in Poky directly

Working directly in Poky is a fast and effective development technique. The idea is that you can directly edit files in WORKDIR or the source directory S and then force specific tasks to rerun in order to test the changes. An example session working on the matchbox-desktop package might look like this:

$ bitbake matchbox-desktop
$ sh
$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/
$ cd matchbox-desktop-2
$ vi src/main.c
$ exit
$ bitbake matchbox-desktop -c compile -f
$ bitbake matchbox-desktop

Here, we build the package, change into the work directory for the package, change a file, then recompile the package. Instead of using sh like this, you can also use two different terminals. The risk with working like this is that a command such as unpack could wipe out the changes you've made to the work directory, so you need to work carefully.

When making changes directly to the work directory files it is useful to do so using quilt, as detailed in the modifying packages with quilt section. The resulting patches can be copied into the recipe directory and used directly in SRC_URI. For a review of the skills used in this section see Sections 2.1.1 and 2.4.2.
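A minimal quilt workflow inside the work directory might look like the following sketch (the patch name and edited file are illustrative):

$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/matchbox-desktop-2
$ quilt new fix-main.patch
$ quilt add src/main.c
$ vi src/main.c
$ quilt refresh

The resulting patch under patches/ can then be copied next to the recipe and added to SRC_URI.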
Developing with 'devshell'

When debugging certain commands, or even just to edit packages, 'devshell' can be a useful tool. To start it, run a command such as:

$ bitbake matchbox-desktop -c devshell

which will open a terminal with a shell prompt within the Poky environment. This means PATH is set up to include the cross toolchain, the pkgconfig variables are set up to find the right .pc files, configure will be able to find the Poky site files, and so on. Within this environment you can run configure or compile commands as if they were being run by Poky itself. You are also changed into the source (S) directory automatically. When finished with the shell, just exit it or close the terminal window.

The default shell used by devshell is gnome-terminal. Other terminals can be used by setting the TERMCMD and TERMCMDRUN variables in local.conf; for examples of the other options available, see meta/conf/bitbake.conf. An external shell is launched rather than opening directly into the original terminal window to make interaction with BitBake's multiple threads easier and also to allow a client/server split of BitBake in the future (devshell will still work over X11 forwarding or similar).

It is worth remembering that inside devshell you need to use the full compiler name, such as arm-poky-linux-gnueabi-gcc, instead of just gcc; the same applies to other tools from gcc, binutils, libtool and so on. Poky will have set up environment variables such as CC to assist applications, such as make, in finding the correct tools.
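For example, inside a devshell for an ARM target a small test file could be compiled directly with the cross compiler (the file names here are purely illustrative):

$ bitbake matchbox-desktop -c devshell
(a new terminal opens in the source directory)
$ arm-poky-linux-gnueabi-gcc -o test test.c    # or: $CC -o test test.c
$ exit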
Developing within Poky with an external SCM-based package

If you're working on a recipe which pulls from an external SCM, it is possible to have Poky notice new changes added to the SCM and then build the latest version. This only works for SCMs where it is possible to get a sensible revision number for changes; currently it works for svn, git and bzr repositories. To enable this behaviour, simply add SRCREV_pn-PN = "${AUTOREV}" to local.conf, where PN is the name of the package for which you want to enable automatic source revision updating.
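For example, to always build the latest revision of matchbox-desktop (used here purely as an illustration of a recipe that fetches from an SCM), local.conf would contain:

SRCREV_pn-matchbox-desktop = "${AUTOREV}"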
Debugging with GDB Remotely

GDB (the GNU Project Debugger) allows you to examine running programs to understand and fix problems, and also to perform postmortem analysis of program crashes. It is available as a package within Poky and is installed by default in sdk images. It works best when the -dbg packages for the application being debugged are installed, as the extra symbols give more meaningful output from GDB.

Sometimes, due to memory or disk space constraints, it is not possible to use GDB directly on the remote target to debug applications. This is because GDB needs to load the debugging information and the binaries of the process being debugged, and then perform many computations to locate information such as function names, variable names and values, stack traces and so on, even before starting the debugging process. This places load on the target system and can alter the characteristics of the program being debugged.

This is where GDBSERVER comes into play: it runs on the remote target and does not load any debugging information from the debugged process. Instead, the debugging information processing is done by a GDB instance running on a distant computer - the host GDB. The host GDB then sends control commands to GDBSERVER to make it stop or start the debugged program, as well as read or write memory regions of that debugged program. All the debugging information loading and processing, as well as the heavy debugging duty, is done by the host GDB, allowing the GDBSERVER running on the target to remain small and fast.

As the host GDB is responsible for loading the debugging information and doing the necessary processing to make actual debugging happen, the user has to make sure it can access the unstripped binaries, complete with their debugging information and compiled with no optimisations. The host GDB must also have local access to all the libraries used by the debugged program. On the remote target the binaries can remain stripped, as GDBSERVER does not need any debugging information; however, they must have been compiled without optimisation so that they match the host's binaries. The binary being debugged on the remote target machine is referred to as the 'inferior', in keeping with GDB documentation and terminology. Further documentation on GDB is available on the GDB project web site.
Launching GDBSERVER on the target

First, make sure gdbserver is installed on the target; if not, install the gdbserver package (which needs the libthread-db1 package). To launch GDBSERVER on the target and make it ready to "debug" a program located at /path/to/inferior, connect to the target and launch:

$ gdbserver localhost:2345 /path/to/inferior

After that, gdbserver should be listening on port 2345 for debugging commands coming from a remote GDB process running on the host computer. Communication between GDBSERVER and the host GDB is done using TCP; to use other communication protocols please refer to the GDBSERVER documentation.
Launching GDB on the host computer

Running GDB on the host computer takes a number of stages, described in the following sections.
Build the cross GDB package

A suitable gdb cross binary is required which runs on your host computer but knows about the ABI of the remote target. This can be obtained from the Poky toolchain, e.g. /opt/poky/sysroots/x86_64-pokysdk-linux/usr/bin/armv5te-poky-linux-gnueabi/arm-poky-linux-gnueabi-gdb, where "x86_64" is the host architecture, "arm" is the target architecture and "linux-gnueabi" the target ABI.

Alternatively, this can be built directly by Poky. To do this you would build the gdb-cross package, for example:

$ bitbake gdb-cross

Once built, the cross gdb binary can be found at tmp/sysroots/<host-arch>/usr/bin/<target-arch>-poky-<target-abi>/<target-arch>-poky-<target-abi>-gdb
Making the inferior binaries available

The inferior binary needs to be available to GDB complete with all debugging symbols in order to get the best possible results, along with any libraries the inferior depends on and their debugging symbols. There are a number of ways this can be done. Perhaps the easiest is to have an 'sdk' image corresponding to the plain image installed on the device; in the case of 'poky-image-sato', 'poky-image-sdk' would contain suitable symbols. The sdk images already have the debugging symbols installed, so it is just a question of expanding the archive to some location and telling GDB where this is.

Alternatively, Poky can build a custom directory of files for a specific debugging purpose by reusing its tmp/rootfs directory on the host computer in a slightly different way to normal. This directory contains the contents of the last built image. The process below assumes that the image running on the target was the last image to be built by Poky, and that the package foo, which contains the inferior binary to be debugged, has been built without optimisation and has debugging information available. First install the foo package into tmp/rootfs:

$ tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
    tmp/work/<target-abi>/poky-image-sato-1.0-r0/opkg.conf -o \
    tmp/rootfs/ update

then:

$ tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
    tmp/work/<target-abi>/poky-image-sato-1.0-r0/opkg.conf \
    -o tmp/rootfs install foo

$ tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
    tmp/work/<target-abi>/poky-image-sato-1.0-r0/opkg.conf \
    -o tmp/rootfs install foo-dbg

which installs the debugging information too.
Launch the host GDB

To launch the host GDB, run the cross gdb binary identified above with the inferior binary specified on the command line:

$ <target-arch>-poky-<target-abi>-gdb rootfs/usr/bin/foo

This loads the binary of program foo as well as its debugging information. Once the gdb prompt appears, you must instruct GDB to load all the libraries of the inferior from tmp/rootfs:

set solib-absolute-prefix /path/to/tmp/rootfs

where /path/to/tmp/rootfs must be the absolute path to tmp/rootfs or wherever the binaries with debugging information are located. Now, tell GDB to connect to the GDBSERVER running on the remote target:

target remote remote-target-ip-address:2345

where remote-target-ip-address is the IP address of the remote target where the GDBSERVER is running, and 2345 is the port on which the GDBSERVER is listening.
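Putting these steps together, a session for an ARM target might look like the following sketch (the cross gdb name, rootfs path and target IP address are illustrative; 192.168.7.2 is the address runqemu typically assigns to the emulated target):

$ arm-poky-linux-gnueabi-gdb rootfs/usr/bin/foo
(gdb) set solib-absolute-prefix /path/to/tmp/rootfs
(gdb) target remote 192.168.7.2:2345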
Using the Debugger

Debugging can now proceed as normal, as if the debugging were being done on the local machine. For example, to tell GDB to break in the main function:

break main

and then to tell GDB to "continue" the inferior execution:

continue

For more information about using GDB please see the project's online documentation.
Profiling with OProfile

OProfile is a statistical profiler well suited to finding performance bottlenecks in both userspace software and the kernel. It provides answers to questions like "Which functions does my application spend the most time in when doing X?". Poky is well integrated with OProfile to make profiling applications on target hardware straightforward.

To use OProfile you need an image with OProfile installed. The easiest way to do this is with "tools-profile" in IMAGE_FEATURES. You also need debugging symbols to be available on the system where the analysis will take place. This can be achieved with "dbg-pkgs" in IMAGE_FEATURES or by installing the appropriate -dbg packages. For successful call graph analysis the binaries must preserve the frame pointer register and hence should be compiled with the "-fno-omit-frame-pointer" flag. In Poky this can be achieved with SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2" or by setting DEBUG_BUILD = "1" in local.conf (the latter will also add extra debug information, making the debug packages large).
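As a sketch, the configuration described above might look like the following; whether IMAGE_FEATURES is best extended in the image recipe or in local.conf depends on how your image is assembled, so treat the placement as an assumption to adapt:

IMAGE_FEATURES += "tools-profile dbg-pkgs"

# Either keep the frame pointer explicitly...
SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2"

# ...or enable a full debug build (this also makes the -dbg packages larger)
DEBUG_BUILD = "1"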
Profiling on the target

All the profiling work can be performed on the target device. A simple OProfile session might look like:

# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
$ opreport -cl

Here, the reset command clears any previously profiled data, and OProfile is then started. The options used to start OProfile mean dynamic library data is kept separately per application, kernel profiling is disabled and callgraphing is enabled up to 5 levels deep. To profile the kernel, you would specify the --vmlinux=/path/to/vmlinux option instead (the vmlinux file is usually in /boot/ in Poky and must match the running kernel). The profile is then stopped and the results viewed with opreport, with options to see the separate library symbols and callgraph information.

Callgraphing means OProfile not only logs information about which functions time is being spent in, but also which functions called those functions (their parents) and which functions a function calls (its children). The higher the callgraphing depth, the more accurate the results, but this also increases the logging overhead, so it should be used with caution. On ARM, binaries need to have the frame pointer enabled for callgraphing to work (compile with the gcc option -fno-omit-frame-pointer).

For more information on using OProfile please see the OProfile online documentation.
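To include kernel samples in the same kind of session, point OProfile at the matching kernel image; the path below is an assumption, so adjust it to wherever your vmlinux actually lives (see the OProfileUI section for where Poky usually installs it):

# opcontrol --reset
# opcontrol --start --separate=lib --vmlinux=/boot/vmlinux-`uname -r` -c 5
[do whatever is being profiled]
# opcontrol --stop
# opreport -cl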
Using OProfileUI

A graphical user interface for OProfile is also available. You can either use prebuilt Debian packages from the OpenedHand repository or download and build from svn at http://svn.o-hand.com/repos/oprofileui/trunk/. If the "tools-profile" image feature is selected, all necessary binaries are installed onto the target device for OProfileUI interaction.

In order to convert the sample data from the target format to one usable on the host, the opimport program is needed. This is not included in standard Debian OProfile packages, but an OProfile package with this addition is also available from the OpenedHand repository. We recommend using OProfile 0.9.3 or greater. Other patches to OProfile may be needed for recent OProfileUI features, but Poky usually includes all needed patches on the target device. Please see the OProfileUI README for up-to-date information, and the OProfileUI website for more information on the OProfileUI project.
Online mode

This assumes a working network connection with the target hardware. In this case you just need to run "oprofile-server" on the device. By default it listens on port 4224; this can be changed with the --port command line option.

The client program is called oprofile-viewer. The UI is relatively straightforward: the key functionality is accessed through the buttons on the toolbar (which are duplicated in the menus). These buttons are:

Connect - connect to the remote host; the IP address or hostname for the target can be supplied here.

Disconnect - disconnect from the target.

Start - start the profiling on the device.

Stop - stop the profiling on the device and download the data to the local host. This will generate the profile and show it in the viewer.

Download - download the data from the target, generate the profile and show it in the viewer.

Reset - reset the sample data on the device. This removes the sample information collected on a previous sampling run; ensure you do this if you do not want to include old sample information.

Save - save the data downloaded from the target to another directory for later examination.

Open - load data that was previously saved.

The behaviour of the client is to download the complete 'profile archive' from the target to the host for processing. This archive is a directory containing the sample data, the object files and the debug information for said object files. The archive is then converted using a script included in this distribution ('oparchconv') that uses 'opimport' to convert the archive from the target format to something that can be processed on the host. Downloaded archives are kept in /tmp and cleared up when they are no longer in use.

If you wish to profile into the kernel, this is possible; you just need to ensure a vmlinux file matching the running kernel is available. In Poky this is usually located in /boot/vmlinux-KERNELVERSION, where KERNELVERSION is the version of the kernel, e.g. 2.6.23. Poky generates separate vmlinux packages for each kernel it builds, so it should be a question of just ensuring a matching package is installed (opkg install kernel-vmlinux). These are automatically installed into development and profiling images alongside OProfile. There is a configuration option within the OProfileUI settings page where the location of the vmlinux file can be entered.

Waiting for debug symbols to transfer from the device can be slow, and it's not always necessary to actually have them on the device for OProfile use. All that is needed is a copy of the filesystem with the debug symbols present on the viewer system. The GDB remote debug section covers how to create such a directory with Poky; the location of this directory can again be specified in the OProfileUI settings dialog. If specified, it will be used where the file checksums match those on the system being profiled.
Offline mode

If no network access to the target is available, an archive for processing in 'oprofile-viewer' can be generated with the following set of commands:

# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
# oparchive -o my_archive

where my_archive is the name of the archive directory where you would like the profile archive to be kept. The directory will be created for you. This can then be copied to another host and loaded using 'oprofile-viewer''s open functionality. The archive will be converted if necessary.