path: root/tools/testing/selftests/bpf/verifier
2020-05-02  bpf: Forbid XADD on spilled pointers for unprivileged users  (Jann Horn)

commit 6e7e63cbb023976d828cdb22422606bf77baa8a9 upstream.

When check_xadd() verifies an XADD operation on a pointer to a stack slot containing a spilled pointer, check_stack_read() verifies that the read, which is part of XADD, is valid. However, since the placeholder value -1 is passed as `value_regno`, check_stack_read() can only return a binary decision and can't return the type of the value that was read.

The intent here is to verify whether the value read from the stack slot may be used as a SCALAR_VALUE; but since check_stack_read() doesn't check the type, and the type information is lost when check_stack_read() returns, this is not enforced, and a malicious user can abuse XADD to leak spilled kernel pointers.

Fix it by letting check_stack_read() verify that the value is usable as a SCALAR_VALUE if no type information is passed to the caller. To be able to use __is_pointer_value() in check_stack_read(), move it up.

Fix up the expected unprivileged error message for a BPF selftest that, until now, assumed that unprivileged users can use XADD on stack-spilled pointers. This also gives us a test for the behavior introduced in this patch for free.

In theory, this could also be fixed by forbidding XADD on stack spills entirely, since XADD is a locked operation (for operations on memory with concurrency) and there can't be any concurrency on the BPF stack; but Alexei has said that he wants to keep XADD on stack slots working to avoid changes to the test suite [1].

The following BPF program demonstrates how to leak a BPF map pointer as an unprivileged user using this bug:

    // r7 = map_pointer
    BPF_LD_MAP_FD(BPF_REG_7, small_map),
    // r8 = launder(map_pointer)
    BPF_STX_MEM(BPF_DW, BPF_REG_FP, BPF_REG_7, -8),
    BPF_MOV64_IMM(BPF_REG_1, 0),
    ((struct bpf_insn) {
        .code = BPF_STX | BPF_DW | BPF_XADD,
        .dst_reg = BPF_REG_FP,
        .src_reg = BPF_REG_1,
        .off = -8
    }),
    BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_FP, -8),

    // store r8 into map
    BPF_MOV64_REG(BPF_REG_ARG1, BPF_REG_7),
    BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
    BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -4),
    BPF_ST_MEM(BPF_W, BPF_REG_ARG2, 0, 0),
    BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
    BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
    BPF_EXIT_INSN(),
    BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_8, 0),

    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN()

[1] https://lore.kernel.org/bpf/20200416211116.qxqcza5vo2ddnkdq@ast-mbp.dhcp.thefacebook.com/

Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200417000007.10734-1-jannh@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-04-23  bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test  (Daniel Borkmann)

[ no upstream commit ]

Switch the comparison so that is_branch_taken() will recognize that the branch below is never taken:

    [...]
    17: [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
    17: (67) r8 <<= 32
    18: [...] R8_w=inv(id=0,smax_value=-4294967296,umin_value=9223372036854775808,umax_value=18446744069414584320,var_off=(0x8000000000000000; 0x7fffffff00000000)) [...]
    18: (c7) r8 s>>= 32
    19: [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
    19: (6d) if r1 s> r8 goto pc+16
    [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
    [...]

Currently we check for is_branch_taken() only if either K is the source, or the source is a scalar value that is const. For upstream it would be good to extend this properly to also check whether dst is const and src is not. For the sake of test_verifier, it is probably not needed here:

    # ./test_verifier 101
    #101/p bpf_get_stack return R0 within range OK
    Summary: 1 PASSED, 0 SKIPPED, 0 FAILED

I haven't seen this issue in test_progs*, though; they are passing fine:

    # ./test_progs-no_alu32 -t get_stack
    Switching to flavor 'no_alu32' subdirectory...
    #20 get_stack_raw_tp:OK
    Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

    # ./test_progs -t get_stack
    #20 get_stack_raw_tp:OK
    Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
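To make the mechanics concrete, here is a minimal sketch (not the committed selftest diff) of why operand order matters to is_branch_taken(): the two jumps below are logically equivalent, but only the second has the constant in the source register, which is where the verifier currently looks for it.

    BPF_MOV64_IMM(BPF_REG_1, 0),
    /* const in dst, unknown scalar in src: not recognized today */
    BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
    /* same condition with the const in src: is_branch_taken() applies */
    BPF_JMP_REG(BPF_JSLT, BPF_REG_8, BPF_REG_1, 16),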
2020-04-23  bpf: Test_verifier, bpf_get_stack return value add <0  (John Fastabend)

commit 9ac26e9973bac5716a2a542e32f380c84db2b88c upstream.

With the current ALU32 subreg handling and the retval refine fix from the last patches, we see an expected failure in test_verifier. With verbose verifier state printed at each step for clarity, we have the following relevant lines (register states that are not useful for seeing the failure cause are omitted):

    #101/p bpf_get_stack return R0 within range FAIL
    Failed to load prog 'Success'!
    [..]
    14: (85) call bpf_get_stack#67
        R0_w=map_value(id=0,off=0,ks=8,vs=48,imm=0) R3_w=inv48
    15: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
    15: (b7) r1 = 0
    16: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
    16: (bf) r8 = r0
    17: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
        R8_w=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
    17: (67) r8 <<= 32
    18: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
        R8_w=inv(id=0,smax_value=9223372032559808512,
                 umax_value=18446744069414584320,
                 var_off=(0x0; 0xffffffff00000000),
                 s32_min_value=0, s32_max_value=0,
                 u32_max_value=0, var32_off=(0x0; 0x0))
    18: (c7) r8 s>>= 32
    19: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
        R8_w=inv(id=0,smin_value=-2147483648, smax_value=2147483647,
                 var32_off=(0x0; 0xffffffff))
    19: (cd) if r1 s< r8 goto pc+16
        R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
        R8_w=inv(id=0,smin_value=-2147483648, smax_value=0,
                 var32_off=(0x0; 0xffffffff))
    20: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
        R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, R9=inv48
    20: (1f) r9 -= r8
    21: (bf) r2 = r7
    22: R2_w=map_value(id=0,off=0,ks=8,vs=48,imm=0)
    22: (0f) r2 += r8
    value -2147483648 makes map_value pointer be out of bounds

After the call to bpf_get_stack() on line 14 and some moves, we have at line 16 an r8 bound with max_value 48 but an unknown min value. This is to be expected: the bpf_get_stack call can only return a max of the input size, but is free to return any negative error in the 32-bit register space. The C helper returns an int, so it will use the lower 32 bits.

Lines 17 and 18 clear the top 32 bits with a left/right shift, but use ARSH, so we still have a worst-case min bound of -2147483648 before line 19. At this point the signed check 'r1 s< r8', meant to protect the addition on line 22 (where the dst reg is a map_value pointer), may very well evaluate to true with a large negative number. The final line 22 will then detect this as an invalid operation and fail the program. What we want to do is proceed only if r8 is a positive, non-error value. So change 'r1 s< r8' to 'r1 s> r8' so that we jump if r8 is negative.

Next we will throw an error because we access past the end of the map value. The map value size is 48 and sizeof(struct test_val) is 48, so we walk off the end of the map value on the second call to bpf_get_stack(). Fix this by changing the size to 24 using 'sizeof(struct test_val) / 2'. After this, everything passes as expected.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/158560426019.10843.3285429543232025187.stgit@john-Precision-5820-Tower
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
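A hedged sketch of the two fixes described above, using the insn macros from the selftests (jump offsets and register choices illustrative, not the verbatim test):

    /* jump away when r8 is negative, i.e. 'r1 s> r8' with r1 = 0;
     * the old 'r1 s< r8' guard fell through on the negative side */
    BPF_MOV64_IMM(BPF_REG_1, 0),
    BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
    /* size argument for the second bpf_get_stack() call: half the
     * map value, so the walk stays inside the 48-byte value
     * (was sizeof(struct test_val)) */
    BPF_MOV64_IMM(BPF_REG_3, sizeof(struct test_val) / 2),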
2020-04-02  bpf: update jmp32 test cases to fix range bound deduction  (Daniel Borkmann)

[ no upstream commit ] Since commit f2d67fec0b43 ("bpf: Undo incorrect __reg_bound_offset32 handling") has been backported to stable, we also need to update the related test cases that started to (expectedly) fail on stable. Given the functionality has been reverted, we need to move the result to REJECT. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-06  selftests: bpf: Add test for JMP32 JSET BPF_X with upper bits set  (Luke Nelson)
The existing tests attempt to check that JMP32 JSET ignores the upper bits in the operand registers. However, the tests missed one such bug in the x32 JIT that is only uncovered when a previous instruction pollutes the upper 32 bits of the registers. This patch adds a new test case that catches the bug by first executing a 64-bit JSET to pollute the upper 32-bits of the temporary registers, followed by a 32-bit JSET which should ignore the upper 32 bits. Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200305234416.31597-2-luke.r.nels@gmail.com
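The shape of the added test, sketched with hypothetical constants and assuming the usual test_verifier skeleton (the committed case may use different values):

    BPF_MOV64_IMM(BPF_REG_0, 0),
    /* operands with bits set only in the upper halves */
    BPF_LD_IMM64(BPF_REG_7, 0x8000000000000000ULL),
    BPF_LD_IMM64(BPF_REG_8, 0x8000000000000000ULL),
    /* 64-bit JSET: taken; on x32 this pollutes the temp registers */
    BPF_JMP_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
    BPF_EXIT_INSN(),
    /* 32-bit JSET: low halves are zero, so this must NOT be taken */
    BPF_JMP32_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
    BPF_MOV64_IMM(BPF_REG_0, 2),
    BPF_EXIT_INSN(),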
2019-12-19  bpf: Add further test_verifier cases for record_func_key  (Daniel Borkmann)

Expand dummy prog generation such that we can easily check on return codes, and add a few more test cases to make sure we keep on tracking pruning behavior.

    # ./test_verifier
    [...]
    #1066/p XDP pkt read, pkt_data <= pkt_meta', bad access 1 OK
    #1067/p XDP pkt read, pkt_data <= pkt_meta', bad access 2 OK
    Summary: 1580 PASSED, 0 SKIPPED, 0 FAILED

Also verified that the JIT dump of the added test cases looks good.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/df7200b6021444fd369376d227de917357285b65.1576789878.git.daniel@iogearbox.net
2019-11-24  selftests/bpf: Add verifier tests for better jmp32 register bounds  (Yonghong Song)

Three test cases are added:

    Test 1: jmp32 'reg op imm'.
    Test 2: jmp32 'reg op reg' where dst 'reg' has unknown constant
            and src 'reg' has known constant.
    Test 3: jmp32 'reg op reg' where dst 'reg' has known constant
            and src 'reg' has unknown constant.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121170651.449096-1-yhs@fb.com
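A minimal sketch of the first flavor ('reg op imm'), under the assumption that r1 arrives as an unknown scalar (illustrative, not the committed tests):

    BPF_MOV64_IMM(BPF_REG_0, 0),
    /* 32-bit compare: taken when w1 > 100 */
    BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 100, 2),
    /* fall-through: the verifier should now know w1 <= 100 even
     * though r1's upper 32 bits stay unknown */
    BPF_MOV64_IMM(BPF_REG_0, 1),
    BPF_EXIT_INSN(),
    BPF_EXIT_INSN(),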
2019-10-04  bpf: Add loop test case with 32 bit reg comparison against 0  (Daniel Borkmann)

Add a loop test with a 32-bit register against a 0 immediate:

    # ./test_verifier 631
    #631/p taken loop with back jump to 1st insn, 2 OK

Disassembly:

    [...]
    1b: test %edi,%edi
    1d: jne 0x0000000000000014
    [...]

Pretty much similar to the prior "taken loop with back jump to 1st insn" test case, just as a jmp32 variant.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
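Sketched as insns (illustrative shape only; the committed test reaches its 1st insn and uses its own counter setup):

    BPF_MOV64_IMM(BPF_REG_1, 2),
    BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
    /* 32-bit compare against immediate 0 forms the back edge */
    BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, -2),
    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN(),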
2019-09-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)

Daniel Borkmann says:

====================
The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Add the ability to use unaligned chunks in the AF_XDP umem. By relaxing where the chunks can be placed, it allows using an arbitrary buffer size and placing a chunk wherever there is a free address in the umem. This helps more seamless DPDK AF_XDP driver integration. Support for i40e, ixgbe and mlx5e, from Kevin and Maxim.

2) Addition of a wakeup flag for AF_XDP tx and fill rings so the application can wake up the kernel for rx/tx processing, which avoids busy-spinning of the latter; useful when app and driver are located on the same core. Support for i40e, ixgbe and mlx5e, from Magnus and Maxim.

3) bpftool fixes for printf()-like functions so the compiler can actually enforce checks, bpftool build system improvements for custom output directories, and addition of a 'bpftool map freeze' command, from Quentin.

4) Support attaching/detaching XDP programs from the 'bpftool net' command, from Daniel.

5) Automatic xskmap cleanup when an AF_XDP socket is released, and several barrier/{read,write}_once fixes in AF_XDP code, from Björn.

6) Relicense of bpf_helpers.h/bpf_endian.h for future libbpf inclusion as well as libbpf versioning improvements, from Andrii.

7) Several new BPF kselftests for verifier precision tracking, from Alexei.

8) Several BPF kselftest fixes wrt endianness to run on s390x, from Ilya.

9) And more BPF kselftest improvements all over the place, from Stanislav.

10) Add a simple BPF map op cache for the nfp driver to batch dumps, from Jakub.

11) AF_XDP socket umem mapping improvements for 32bit archs, from Ivan.

12) Add BPF-to-BPF call and BTF line info support for the s390x JIT, from Yauheni.

13) Small optimization in the arm64 JIT to spare 1 insn for BPF_MOD, from Jerin.

14) Fix an error check in the bpf_tcp_gen_syncookie() helper, from Petar.

15) Various minor fixes and cleanups, from Nathan, Masahiro, Masanari, Peter, Wei, Yue.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-09-05  selftests/bpf: precision tracking tests  (Alexei Starovoitov)

Add two tests to check that stack slot marking during backtracking doesn't trigger the 'spi > allocated_stack' warning. One test uses a BPF_ST insn; the other uses BPF_STX. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-08-28  selftests/bpf: add precision tracking test  (Alexei Starovoitov)

Copy-paste of the existing test "calls: cross frame pruning - liveness propagation", but run with a different parentage chain heuristic, which stresses a different path in the precision tracking logic. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-08-28  selftests/bpf: verifier precise tests  (Alexei Starovoitov)
Use BPF_F_TEST_STATE_FREQ flag to check that precision tracking works as expected by comparing every step it takes. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-08-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
Merge conflict of mlx5 resolved using instructions in merge commit 9566e650bf7fdf58384bb06df634f7531ca3a97e. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-08-13  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)

Daniel Borkmann says:

====================
The following pull-request contains BPF updates for your *net-next* tree.

There is a small merge conflict in libbpf (Cc Andrii so he's in the loop as well):

    for (i = 1; i <= btf__get_nr_types(btf); i++) {
        t = (struct btf_type *)btf__type_by_id(btf, i);

        if (!has_datasec && btf_is_var(t)) {
            /* replace VAR with INT */
            t->info = BTF_INFO_ENC(BTF_KIND_INT, 0, 0);
    <<<<<<< HEAD
            /*
             * using size = 1 is the safest choice, 4 will be too
             * big and cause kernel BTF validation failure if
             * original variable took less than 4 bytes
             */
            t->size = 1;
            *(int *)(t+1) = BTF_INT_ENC(0, 0, 8);
        } else if (!has_datasec && kind == BTF_KIND_DATASEC) {
    =======
            t->size = sizeof(int);
            *(int *)(t + 1) = BTF_INT_ENC(0, 0, 32);
        } else if (!has_datasec && btf_is_datasec(t)) {
    >>>>>>> 72ef80b5ee131e96172f19e74b4f98fa3404efe8
            /* replace DATASEC with STRUCT */

Conflict is between the two commits 1d4126c4e119 ("libbpf: sanitize VAR to conservative 1-byte INT") and b03bc6853c0e ("libbpf: convert libbpf code to use new btf helpers"), so we need to pick the sanitation fixup as well as use the new btf_is_datasec() helper and the whitespace cleanup. Looks like the following:

    [...]
    if (!has_datasec && btf_is_var(t)) {
        /* replace VAR with INT */
        t->info = BTF_INFO_ENC(BTF_KIND_INT, 0, 0);
        /*
         * using size = 1 is the safest choice, 4 will be too
         * big and cause kernel BTF validation failure if
         * original variable took less than 4 bytes
         */
        t->size = 1;
        *(int *)(t + 1) = BTF_INT_ENC(0, 0, 8);
    } else if (!has_datasec && btf_is_datasec(t)) {
        /* replace DATASEC with STRUCT */
    [...]

The main changes are:

1) Addition of core parts of compile once - run everywhere (co-re) effort, that is, relocation of fields offsets in libbpf as well as exposure of kernel's own BTF via sysfs and loading through libbpf, from Andrii. More info on co-re: http://vger.kernel.org/bpfconf2019.html#session-2 and http://vger.kernel.org/lpc-bpf2018.html#session-2

2) Enable passing input flags to the BPF flow dissector to customize parsing and allowing it to stop early similar to the C based one, from Stanislav.

3) Add a BPF helper function that allows generating SYN cookies from XDP and tc BPF, from Petar.

4) Add devmap hash-based map type for more flexibility in device lookup for redirects, from Toke.

5) Improvements to XDP forwarding sample code now utilizing recently enabled devmap lookups, from Jesper.

6) Add support for reporting the effective cgroup progs in bpftool, from Jakub and Takshak.

7) Fix reading kernel config from bpftool via /proc/config.gz, from Peter.

8) Fix AF_XDP umem pages mapping for 32 bit architectures, from Ivan.

9) Follow-up to add two more BPF loop tests for the selftest suite, from Alexei.

10) Add perf event output helper also for other skb-based program types, from Allan.

11) Fix a co-re related compilation error in selftests, from Yonghong.
====================

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-08-01  selftests/bpf: tests for jmp to 1st insn  (Alexei Starovoitov)

Add 2 tests that check JIT code generation for jumps to the 1st insn. The 1st test is similar to the syzbot reproducer; its backwards branch is never taken at runtime. The 2nd test has a branch to the 1st insn that does execute. That test is written as two bpf functions, since it's not possible to construct a valid single bpf program that jumps to its 1st insn. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com>
2019-07-25  selftests/bpf: Add selftests for bpf_perf_event_output  (Allan Zhang)

Software event output is only enabled for a few prog types. This test ensures that bpf_perf_event_output can be used successfully from all supported program types. Signed-off-by: Allan Zhang <allanzhang@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-23  selftests/bpf: add another gso_segs access  (Eric Dumazet)

Use BPF_REG_1 as both the source and the destination of the gso_segs read, to exercise the "bpf: fix access to skb_shared_info->gso_segs" fix. Signed-off-by: Eric Dumazet <edumazet@google.com> Suggested-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
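The access pattern in question, sketched (assuming the usual test_verifier skeleton around it):

    /* same register serves as address and destination of the load */
    BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
                offsetof(struct __sk_buff, gso_segs)),
    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN(),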
2019-07-18  selftests/bpf: fix "valid read map access into a read-only array 1" on s390  (Ilya Leoshkevich)
This test looks up a 32-bit map element and then loads it using a 64-bit load. This does not work on s390, which is a big-endian machine. Since the point of this test doesn't seem to be loading a smaller value using a larger load, simply use a 32-bit load. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
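The fix, sketched with an illustrative offset: a 64-bit load of a 32-bit element pulls in the neighboring four bytes, and which half holds the value depends on endianness; a load whose width matches the element does not.

    /* before (breaks on big-endian): */
    BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
    /* after: width matches the 32-bit map element */
    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),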
2019-07-16  selftests/bpf: fix "alu with different scalars 1" on s390  (Ilya Leoshkevich)

BPF_LDX_MEM is used to load the least significant byte of the retrieved test_val.index; however, on big-endian machines it ends up retrieving the most significant byte. Change the test to load the whole int in order to make it endianness-independent. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-15  selftests/bpf: add selftests for wide loads  (Stanislav Fomichev)

Mirror the existing wide store tests with wide loads. The only significant difference is the expected error string. Cc: Yonghong Song <yhs@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-15  selftests/bpf: rename verifier/wide_store.c to verifier/wide_access.c  (Stanislav Fomichev)
Move the file and rename internal BPF_SOCK_ADDR define to BPF_SOCK_ADDR_STORE. This selftest will be extended in the next commit with the wide loads. Cc: Yonghong Song <yhs@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Two cases of overlapping changes, nothing fancy. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08  selftests/bpf: add verifier tests for wide stores  (Stanislav Fomichev)

Make sure that wide stores are allowed at proper (aligned) addresses. Note that user_ip6 is naturally aligned on an 8-byte boundary, so correct addresses are user_ip6[0] and user_ip6[2]. msg_src_ip6 is, however, aligned on a 4-byte boundary, so only msg_src_ip6[1] can be wide-stored. Cc: Andrii Nakryiko <andriin@fb.com> Cc: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
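One accepted case, sketched for a cgroup sock_addr program with the UAPI field names (illustrative, not the verbatim selftest):

    BPF_MOV64_IMM(BPF_REG_0, 0),
    /* 8-byte store at an 8-byte-aligned offset: must be accepted */
    BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0,
                offsetof(struct bpf_sock_addr, user_ip6[0])),
    BPF_MOV64_IMM(BPF_REG_0, 1),
    BPF_EXIT_INSN(),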
2019-07-03  selftests: bpf: add tests for shifts by zero  (Luke Nelson)

There are currently no tests for ALU64 shift operations when the shift amount is 0. This adds 6 new tests to make sure such shifts are equivalent to a no-op. The x32 JIT had bugs of this kind that could have been caught by these tests. Cc: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
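A sketch of the no-op property the tests pin down (value illustrative):

    BPF_LD_IMM64(BPF_REG_0, 0x0123456789abcdefULL),
    /* each ALU64 shift by 0 must leave r0 bit-for-bit unchanged */
    BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 0),
    BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 0),
    BPF_ALU64_IMM(BPF_ARSH, BPF_REG_0, 0),
    BPF_EXIT_INSN(),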
2019-06-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)

Alexei Starovoitov says:

====================
pull-request: bpf-next 2019-06-19

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) new SO_REUSEPORT_DETACH_BPF setsockopt, from Martin.

2) BTF based map definition, from Andrii.

3) support bpf_map_lookup_elem for xskmap, from Jonathan.

4) bounded loops and scalar precision logic in the verifier, from Alexei.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-19  selftests/bpf: add basic verifier tests for loops  (Alexei Starovoitov)
This set of tests is a rewrite of Edward's earlier tests: https://patchwork.ozlabs.org/patch/877221/ Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-19  selftests/bpf: fix tests  (Alexei Starovoitov)
Fix tests that assumed no loops. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-19  selftests/bpf: fix tests due to const spill/fill  (Alexei Starovoitov)

Fix tests that incorrectly assumed that the verifier cannot track constants through the stack. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-13  bpf: fix div64 overflow tests to properly detect errors  (Naveen N. Rao)
If the result of the division is LLONG_MIN, current tests do not detect the error since the return value is truncated to a 32-bit value and ends up being 0. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
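A hedged sketch of the fixed pattern: compute the 0/1 verdict inside the program over the full 64-bit quotient, instead of returning the quotient itself (whose low 32 bits are 0 both for the correct result and for a bogus LLONG_MIN):

    BPF_MOV64_IMM(BPF_REG_1, -1),
    BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
    /* eBPF BPF_DIV is unsigned: the architecturally defined result
     * here is 0; a buggy signed division could yield LLONG_MIN */
    BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
    BPF_MOV32_IMM(BPF_REG_0, 0),
    BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 1),   /* full 64-bit check */
    BPF_MOV32_IMM(BPF_REG_0, 1),             /* retval 1 = correct */
    BPF_EXIT_INSN(),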
2019-06-10  tools/bpf: Add bpf_map_lookup_elem selftest for xskmap  (Jonathan Lemon)

Check that bpf_map_lookup_elem lookups and structure accesses operate correctly. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-10  bpf: Allow bpf_map_lookup_elem() on an xskmap  (Jonathan Lemon)

Currently, the AF_XDP code uses a separate map in order to determine if an xsk is bound to a queue. Instead of doing this, have bpf_map_lookup_elem() return an xdp_sock. Rearrange some xdp_sock members to eliminate structure holes. Remove the selftest; it will be added back in a later patch. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-29  selftests: bpf: complete sub-register zero extension checks  (Jiong Wang)

The eBPF ISA specification requires the high 32 bits to be cleared whenever only the low 32-bit sub-register is written. JIT back-ends must guarantee these semantics when doing code-gen. This patch completes the unit tests for all of those insns that could be visible to JIT back-ends and define sub-registers; if a JIT back-end fails to guarantee the mentioned semantics, these unit tests will fail. Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
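The check pattern, sketched for a 32-bit BPF_MOV (each covered insn class gets an analogous case; values illustrative):

    BPF_LD_IMM64(BPF_REG_0, 0xffffffffffffffffULL),
    /* 32-bit write: the ISA requires the upper 32 bits become zero */
    BPF_MOV32_IMM(BPF_REG_0, 2),
    BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
    BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
    BPF_EXIT_INSN(),            /* buggy JIT: returns the stale upper bits */
    BPF_MOV64_IMM(BPF_REG_0, 1),
    BPF_EXIT_INSN(),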
2019-05-29selftests: bpf: move sub-register zero extension checks into subreg.cJiong Wang
It is better to centralize all sub-register zero extension checks into an independent file. This patch takes the first step to move existing sub-register zero extension checks into subreg.c. Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-23  selftests: bpf: add zero extend checks for ALU32 and/or/xor  (Björn Töpel)

Add three tests to test_verifier/basic_instr that make sure that the high 32 bits of the destination register are cleared after an ALU32 and/or/xor. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-13  bpf: add various test cases for backward jumps  (Daniel Borkmann)
Add a couple of tests to make sure branch(/call) offset adjustments are correctly performed. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Minor conflict with the DSA legacy code removal. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Three trivial overlapping conflicts. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-01  bpf, x32: Fix bug for BPF_JMP | {BPF_JSGT, BPF_JSLE, BPF_JSLT, BPF_JSGE}  (Wang YanQing)

The current method to compare 64-bit numbers for a conditional jump is:

    1) Compare the high 32 bits first.
    2) If the high 32 bits aren't the same, then goto step 4.
    3) Compare the low 32 bits.
    4) Check the desired condition.

This method is right for unsigned comparison, but it is buggy for signed comparison, because it does a signed comparison for the low 32 bits too. There is only one sign bit in a 64-bit number: the MSB of the 64-bit value. It is wrong to treat the low 32 bits as a signed number and do a signed comparison on them.

This patch fixes the bug and adds a testcase in selftests/bpf for it.

Signed-off-by: Wang YanQing <udknight@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
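In C terms, the corrected comparison strategy looks like the sketch below (an illustrative helper, not the JIT code itself): the high words are compared signed, while the low words, which carry no sign bit of their own, are compared unsigned.

    /* jsgt64(): computes a s> b using only 32-bit comparisons */
    static bool jsgt64(long long a, long long b)
    {
        int a_hi = (int)(a >> 32), b_hi = (int)(b >> 32);
        unsigned int a_lo = (unsigned int)a, b_lo = (unsigned int)b;

        if (a_hi != b_hi)
            return a_hi > b_hi;   /* signed compare of high words */
        return a_lo > b_lo;       /* unsigned compare of low words */
    }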
2019-04-27  bpf: Add verifier tests for the bpf_sk_storage  (Martin KaFai Lau)

This patch adds verifier tests for the bpf_sk_storage:

    1. ARG_PTR_TO_MAP_VALUE_OR_NULL
    2. Map and helper compatibility (e.g. disallow bpf_map_lookup_elem)

It also takes this chance to remove the unused struct btf_raw_data and to use the BTF encoding macros from "test_btf.h".

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-26  selftests: bpf: test writable buffers in raw tps  (Matt Mullins)

This tests that:

  * a BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE program cannot be attached if it uses either:
      * a variable offset to the tracepoint buffer, or
      * an offset beyond the size of the tracepoint buffer
  * a tracer can modify the buffer provided when attached to a writable tracepoint in bpf_prog_test_run

Signed-off-by: Matt Mullins <mmullins@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-25  selftests/bpf: test cases for pkt/null checks in subprogs  (Paul Chaignon)

The first test case, for pointer null checks, is equivalent to the following pseudo-code. It checks that the verifier does not complain on line 6 and recognizes that ptr isn't null.

    1: ptr = bpf_map_lookup_elem(map, &key);
    2: ret = subprog(ptr) {
    3:     return ptr != NULL;
    4: }
    5: if (ret)
    6:     value = *ptr;

The second test case, for packet bound checks, is equivalent to the following pseudo-code. It checks that the verifier does not complain on line 7 and recognizes that the packet is at least 1 byte long.

    1: pkt_end = ctx.pkt_end;
    2: ptr = ctx.pkt + 8;
    3: ret = subprog(ptr, pkt_end) {
    4:     return ptr <= pkt_end;
    5: }
    6: if (ret)
    7:     value = *(u8 *)ctx.pkt;

Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-16  selftests/bpf: two scale tests  (Alexei Starovoitov)

Add two tests to check that a sequence of 1024 jumps is verifiable. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-04-12  selftests/bpf: Test ARG_PTR_TO_LONG arg type  (Andrey Ignatov)

Test that the verifier handles the new argument types properly, including uninitialized or partially initialized values, misaligned stack access, etc.

Example of output:

    #456/p ARG_PTR_TO_LONG uninitialized OK
    #457/p ARG_PTR_TO_LONG half-uninitialized OK
    #458/p ARG_PTR_TO_LONG misaligned OK
    #459/p ARG_PTR_TO_LONG size < sizeof(long) OK
    #460/p ARG_PTR_TO_LONG initialized OK

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)

Daniel Borkmann says:

====================
pull-request: bpf-next 2019-04-12

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Improve BPF verifier scalability for large programs through two optimizations: i) remove verifier states that are not useful in pruning, ii) stop walking parentage chain once first LIVE_READ is seen. Combined gives approx 20x speedup. Increase limits for accepting large programs under root, and add various stress tests, from Alexei.

2) Implement global data support in BPF. This enables static global variables for .data, .rodata and .bss sections to be properly handled which allows for more natural program development. This also opens up the possibility to optimize program workflow by compiling ELFs only once and later only rewriting section data before reload, from Daniel and with test cases and libbpf refactoring from Joe.

3) Add config option to generate BTF type info for vmlinux as part of the kernel build process. DWARF debug info is converted via pahole to BTF. Latter relies on libbpf and makes use of BTF deduplication algorithm which results in 100x savings compared to DWARF data. Resulting .BTF section is typically about 2MB in size, from Andrii.

4) Add BPF verifier support for stack access with variable offset from helpers and add various test cases along with it, from Andrey.

5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header so that L2 encapsulation can be used for tc tunnels, from Alan.

6) Add support for input __sk_buff context in BPF_PROG_TEST_RUN so that users can define a subset of allowed __sk_buff fields that get fed into the test program, from Stanislav.

7) Add bpf fs multi-dimensional array tests for BTF test suite and fix up various UBSAN warnings in bpftool, from Yonghong.

8) Generate a pkg-config file for libbpf, from Luca.

9) Dump program's BTF id in bpftool, from Prashant.

10) libbpf fix to use smaller BPF log buffer size for AF_XDP's XDP program, from Magnus.

11) kallsyms related fixes for the case when symbols are not present in BPF selftests and samples, from Daniel.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-09  bpf, selftest: test {rd, wr}only flags and direct value access  (Daniel Borkmann)

Extend test_verifier with various test cases around the two kernel extensions, that is, {rd,wr}only map support as well as direct map value access. All passing; one skipped due to xskmap not being present on the test machine:

    # ./test_verifier
    [...]
    #948/p XDP pkt read, pkt_meta' <= pkt_data, bad access 1 OK
    #949/p XDP pkt read, pkt_meta' <= pkt_data, bad access 2 OK
    #950/p XDP pkt read, pkt_data <= pkt_meta', good access OK
    #951/p XDP pkt read, pkt_data <= pkt_meta', bad access 1 OK
    #952/p XDP pkt read, pkt_data <= pkt_meta', bad access 2 OK
    Summary: 1410 PASSED, 1 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)

Minor comment merge conflict in mlx5. The staging driver has a fixup due to the skb->xmit_more changes in 'net-next', but it was removed in 'net'. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-05  selftests/bpf: Test unbounded var_off stack access  (Andrey Ignatov)

Test the case when reg->smax_value is too small/big and can overflow, and separately the cases of min and max values outside of the stack bounds.

Example of output:

    # ./test_verifier
    #856/p indirect variable-offset stack access, unbounded OK
    #857/p indirect variable-offset stack access, max out of bound OK
    #858/p indirect variable-offset stack access, min out of bound OK

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-04-05  selftests/bpf: Test indirect var_off stack access in unpriv mode  (Andrey Ignatov)

Test that the verifier rejects indirect stack access with a variable offset in unprivileged mode and accepts the same code in privileged mode. Since pointer arithmetic is prohibited in unprivileged mode, the verifier should reject the program even before it gets to the helper call that uses the variable offset, at the point where that variable offset is constructed.

Example of output:

    # ./test_verifier
    ...
    #859/u indirect variable-offset stack access, priv vs unpriv OK
    #859/p indirect variable-offset stack access, priv vs unpriv OK

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-04-05  selftests/bpf: Test indirect var_off stack access in raw mode  (Andrey Ignatov)

Test that the verifier rejects indirect access to uninitialized stack with a variable offset.

Example of output:

    # ./test_verifier
    ...
    #859/p indirect variable-offset stack access, uninitialized OK

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-04-04  selftests/bpf: synthetic tests to push verifier limits  (Alexei Starovoitov)
Add a test to generate 1m ld_imm64 insns to stress the verifier. Bump the size of fill_ld_abs_vlan_push_pop test from 4k to 29k and jump_around_ld_abs from 4k to 5.5k. Larger sizes are not possible due to 16-bit offset encoding in jump instructions. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>