path: root/arch/arm64/kernel/probes/kprobes.c
Age | Commit message | Author
2023-02-07arm64: kprobes: Drop ID map text from kprobes blacklistArd Biesheuvel
The ID mapped text region is never accessed via the normal kernel mapping of text, and so it was moved into .rodata instead. This means it is no longer considered a suitable place for kprobes by default, so the explicit blacklist is unnecessary and actually results in an error message at boot: kprobes: Failed to populate blacklist (error -22), kprobes not restricted, be careful using them! So stop blacklisting the ID map text explicitly. Fixes: af7249b317e4 ("arm64: kernel: move identity map out of .text mapping") Reported-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20230204101807.2862321-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
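For reference, a hedged sketch of how arch code feeds a text range into the kprobes blacklist; kprobe_add_area_blacklist() and the __idmap_text_* markers are real symbols, though the exact hunk this patch removes may differ:

    int __init arch_populate_kprobe_blacklist(void)
    {
            /* reject probes placed anywhere inside the ID map text */
            return kprobe_add_area_blacklist((unsigned long)__idmap_text_start,
                                             (unsigned long)__idmap_text_end);
    }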
2022-12-05arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRKMasami Hiramatsu (Google)
Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails to find a kprobe corresponding to the address. Since arm64 kprobes uses stop_machine-based text patching for removing a BRK, all running kprobe_break_handler() invocations are guaranteed to be done at that point, and only after removing the BRK is the kprobe removed from its hash list. Thus, if kprobe_break_handler() fails to find a kprobe in the hash list, there is a bug. Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/166994753273.439920.6629626290560350760.stgit@devnote3 Signed-off-by: Will Deacon <will@kernel.org>
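A hedged sketch of the handler shape this describes (arm64 names; the real body also handles reentrancy and single-step setup):

    static int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
    {
            struct kprobe *p = get_kprobe((kprobe_opcode_t *)instruction_pointer(regs));

            if (!p)
                    return DBG_HOOK_ERROR;  /* BRK with no registered kprobe: a bug */

            /* ... run the pre-handler, set up the out-of-line step ... */
            return DBG_HOOK_HANDLED;
    }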
2022-12-05arm64: kprobes: Let arch do_page_fault() fix up page fault in user handlerMasami Hiramatsu (Google)
Since arm64's do_page_fault() can handle a page fault more appropriately than kprobe_fault_handler() according to the context, let it handle the page fault instead of simply calling fixup_exception() in kprobe_fault_handler(). Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/166994752269.439920.4801339965959400456.stgit@devnote3 Signed-off-by: Will Deacon <will@kernel.org>
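The behavioural point is the hook's return value; a hedged sketch:

    int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
    {
            struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

            if (kcb->kprobe_status == KPROBE_HIT_SS) {
                    /* ... abort the single-step and reset kprobe state ... */
            }
            /* 0 means "not handled here": do_page_fault() then runs as
             * usual, including any exception-table fixup. */
            return 0;
    }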
2022-09-30arm64/kprobe: Optimize the performance of patching single-step slotLiao Chang
The single-step slot is not used until the kprobe is enabled, which means no race condition can occur on it under SMP; hence it is safe to patch the ss slot without stopping the machine. Since aarch64_insn_patch_text_nosync() already keeps the I and D caches coherent for the single-step slot, there is no need to do that again via flush_icache_range(). Acked-by: Will Deacon <will@kernel.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Liao Chang <liaochang1@huawei.com> Link: https://lore.kernel.org/r/20220927022435.129965-4-liaochang1@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
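A hedged sketch of the resulting slot-patching path (this mirrors what arch_prepare_ss_slot() does after the patch; p is the struct kprobe being prepared):

    kprobe_opcode_t *addr = p->ainsn.api.insn;

    /* The slot is private until the kprobe is armed, so no stop_machine():
     * aarch64_insn_patch_text_nosync() writes the instruction and performs
     * the I/D cache maintenance itself, making flush_icache_range()
     * redundant here. */
    aarch64_insn_patch_text_nosync(addr, p->opcode);                   /* insn */
    aarch64_insn_patch_text_nosync(addr + 1, BRK64_OPCODE_KPROBES_SS); /* trap back */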
2022-04-29arm64: Treat ESR_ELx as a 64-bit registerAlexandru Elisei
In the initial release of the ARM Architecture Reference Manual for ARMv8-A, the ESR_ELx registers were defined as 32-bit registers. This changed in 2018 with version D.a (ARM DDI 0487D.a) of the architecture, when they became 64-bit registers, with bits [63:32] defined as RES0. In version G.a, a new field was added to ESR_ELx, ISS2, which covers bits [36:32]. This field is used when the Armv8.7 extension FEAT_LS64 is implemented. As a result of the evolution of the register width, Linux stores it as both a 64-bit value and a 32-bit value, which hasn't affected correctness so far as Linux only uses the lower 32 bits of the register. Make the register type consistent and always treat it as 64 bits wide. The register is redefined as an "unsigned long", which is an unsigned double-word (64-bit quantity) for the LP64 machine (aapcs64 [1], Table 1, page 14). The type was chosen because "unsigned long" matches the register width and because FAR_ELx, which is used together with ESR_ELx in exception handling, is also declared as "unsigned long". The 64-bit type also makes adding support for architectural features that use fields above bit 31 easier in the future. The KVM hypervisor will receive a similar update in a subsequent patch. [1] https://github.com/ARM-software/abi-aa/releases/download/2021Q3/aapcs64.pdf Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220425114444.368693-4-alexandru.elisei@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
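An illustration of the pattern applied treewide (do_mem_abort() is one real consumer; the full patch touches many such signatures):

    /* before: ESR truncated to 32 bits */
    void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs);

    /* after: ESR carried as a 64-bit quantity end to end */
    void do_mem_abort(unsigned long far, unsigned long esr, struct pt_regs *regs);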
2021-10-22arm64: kprobes: Record frame pointer with kretprobe instanceMasami Hiramatsu
Record the frame pointer instead of the stack address with the kretprobe instance, as the identifier on the instance list. Since arm64 always enables CONFIG_FRAME_POINTER, we can use the actual frame pointer (x29). This will allow the stacktrace code to find the original return address from the FP alone. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
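A hedged sketch of what gets recorded, close to the arm64 arch_prepare_kretprobe() of this era:

    void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
    {
            ri->ret_addr = (kprobe_opcode_t *)regs->regs[30]; /* saved return address */
            ri->fp = (void *)regs->regs[29];                  /* frame pointer, not sp */

            /* divert the return through the kretprobe trampoline */
            regs->regs[30] = (unsigned long)__kretprobe_trampoline;
    }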
2021-09-30kprobes: treewide: Make it harder to refer kretprobe_trampoline directlyMasami Hiramatsu
Since there is now kretprobe_trampoline_addr() for referring to the address of the kretprobe trampoline code, we don't need to access kretprobe_trampoline directly. Make it harder to refer to directly by renaming it to __kretprobe_trampoline(). Link: https://lkml.kernel.org/r/163163045446.489837.14510577516938803097.stgit@devnote2 Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-09-30kprobes: treewide: Remove trampoline_address from kretprobe_trampoline_handler()Masami Hiramatsu
The __kretprobe_trampoline_handler() callback, called from low level arch kprobes methods, has the 'trampoline_address' parameter, which is entirely superfluous as it basically just replicates: dereference_kernel_function_descriptor(kretprobe_trampoline) In fact we had bugs in arch code where it wasn't replicated correctly. So remove this superfluous parameter and use kretprobe_trampoline_addr() instead. Link: https://lkml.kernel.org/r/163163044546.489837.13505751885476015002.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-09-30kprobes: treewide: Cleanup the error messages for kprobesMasami Hiramatsu
This cleans up the error/notification messages in kprobes-related code. Basically it defines a 'pr_fmt()' macro for each file and updates the messages to describe: - what happened, - what the kernel is going to do or not do, - whether the kernel is fine, - what the user can do about it. Also, if a message is not needed (e.g. the function returns a unique error code, or another error message is already shown), remove it, and replace the message with WARN_*() macros where suitable. Link: https://lkml.kernel.org/r/163163036568.489837.14085396178727185469.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
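For example, the per-file prefix pattern this cleanup applies looks like the following (the message text here is illustrative):

    /* at the top of the file, before the #includes */
    #define pr_fmt(fmt) "kprobes: " fmt

    pr_err("Failed to populate blacklist (error %d)\n", ret);
    /* prints: "kprobes: Failed to populate blacklist (error -22)" */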
2021-06-28Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linuxLinus Torvalds
Pull arm64 updates from Will Deacon: "There's a reasonable amount here and the juicy details are all below. It's worth noting that the MTE/KASAN changes strayed outside of our usual directories due to core mm changes and some associated changes to some other architectures; Andrew asked for us to carry these [1] rather than take them via the -mm tree. Summary: - Optimise SVE switching for CPUs with 128-bit implementations. - Fix output format from SVE selftest. - Add support for versions v1.2 and 1.3 of the SMC calling convention. - Allow Pointer Authentication to be configured independently for kernel and userspace. - PMU driver cleanups for managing IRQ affinity and exposing event attributes via sysfs. - KASAN optimisations for both hardware tagging (MTE) and out-of-line software tagging implementations. - Relax frame record alignment requirements to facilitate 8-byte alignment with KASAN and Clang. - Cleanup of page-table definitions and removal of unused memory types. - Reduction of ARCH_DMA_MINALIGN back to 64 bytes. - Refactoring of our instruction decoding routines and addition of some missing encodings. - Entry code moved into C and hardened against harmful compiler instrumentation. - Update booting requirements for the FEAT_HCX feature, added to v8.7 of the architecture. - Fix resume from idle when pNMI is being used. - Additional CPU sanity checks for MTE and preparatory changes for systems where not all of the CPUs support 32-bit EL0. - Update our kernel string routines to the latest Cortex Strings implementation. - Big cleanup of our cache maintenance routines, which were confusingly named and inconsistent in their implementations. - Tweak linker flags so that GDB can understand vmlinux when using RELR relocations. - Boot path cleanups to enable early initialisation of per-cpu operations needed by KCSAN. - Non-critical fixes and miscellaneous cleanup" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (150 commits) arm64: tlb: fix the TTL value of tlb_get_level arm64: Restrict undef hook for cpufeature registers arm64/mm: Rename ARM64_SWAPPER_USES_SECTION_MAPS arm64: insn: avoid circular include dependency arm64: smp: Bump debugging information print down to KERN_DEBUG drivers/perf: fix the missed ida_simple_remove() in ddr_perf_probe() perf/arm-cmn: Fix invalid pointer when access dtc object sharing the same IRQ number arm64: suspend: Use cpuidle context helpers in cpu_suspend() PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter() arm64: Convert cpu_do_idle() to using cpuidle context helpers arm64: Add cpuidle context save/restore helpers arm64: head: fix code comments in set_cpu_boot_mode_flag arm64: mm: drop unused __pa(__idmap_text_start) arm64: mm: fix the count comments in compute_indices arm64/mm: Fix ttbr0 values stored in struct thread_info for software-pan arm64: mm: Pass original fault address to handle_mm_fault() arm64/mm: Drop SECTION_[SHIFT|SIZE|MASK] arm64/mm: Use CONT_PMD_SHIFT for ARM64_MEMSTART_SHIFT arm64/mm: Drop SWAPPER_INIT_MAP_SIZE arm64: Conditionally configure PTR_AUTH key of the kernel. ...
2021-06-11arm64: insn: decouple patching from insn codeMark Rutland
Currently, <asm/insn.h> includes <asm/patching.h>. We intend that <asm/insn.h> will be usable from userspace, so it doesn't make sense to include headers for kernel-only features such as the patching routines, and we'd intended to restrict <asm/insn.h> to instruction encoding details. Let's decouple the patching code from <asm/insn.h>, and explicitly include <asm/patching.h> where it is needed. Since <asm/patching.h> isn't included from assembly, we can drop the __ASSEMBLY__ guards. At the same time, sort the kprobes includes so that it's easier to see what is and isn't included. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210609102301.17332-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-03kprobes: Do not increment probe miss count in the fault handlerNaveen N. Rao
Kprobes has a counter, 'nmissed', that is used to count the number of times a probe handler was not called. This generally happens when we hit a kprobe while handling another kprobe. However, if one of the probe handlers causes a fault, we currently increment 'nmissed'. The comment in the fault handler indicates that this can be used to account for faults taken by the probe handlers. But this has never been the intention, as is evident from the comment above 'nmissed' in 'struct kprobe': /* count the number of times this probe was temporarily disarmed */ unsigned long nmissed; Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20210601120150.672652-1-naveen.n.rao@linux.vnet.ibm.com
2021-06-01kprobes: Remove kprobe::fault_handlerPeter Zijlstra
The reason for kprobe::fault_handler(), as given by its comment: * We come here because instructions in the pre/post * handler caused the page_fault, this could happen * if handler tries to access user space by * copy_from_user(), get_user() etc. Let the * user-specified handler try to fix it first. is just plain bad. Those other handlers are run from non-preemptible context and had better use _nofault() functions. Also, there is no upstream usage of this. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20210525073213.561116662@infradead.org
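The sanctioned replacement is the _nofault() accessor family; a hedged sketch:

    static void peek_kernel_word(unsigned long addr)
    {
            long val;

            /* absorbs any fault itself; no kprobe::fault_handler needed */
            if (copy_from_kernel_nofault(&val, (void *)addr, sizeof(val)))
                    return;
            pr_info("0x%lx = 0x%lx\n", addr, val);
    }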
2021-04-26Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linuxLinus Torvalds
Pull arm64 updates from Catalin Marinas: - MTE asynchronous support for KASan. Previously only synchronous (slower) mode was supported. Asynchronous is faster but does not allow precise identification of the illegal access. - Run kernel mode SIMD with softirqs disabled. This allows using NEON in softirq context for crypto performance improvements. The conditional yield support is modified to take softirqs into account and reduce the latency. - Preparatory patches for Apple M1: handle CPUs that only have the VHE mode available (host kernel running at EL2), add FIQ support. - arm64 perf updates: support for HiSilicon PA and SLLC PMU drivers, new functions for the HiSilicon HHA and L3C PMU, cleanups. - Re-introduce support for execute-only user permissions but only when the EPAN (Enhanced Privileged Access Never) architecture feature is available. - Disable fine-grained traps at boot and improve the documented boot requirements. - Support CONFIG_KASAN_VMALLOC on arm64 (only with KASAN_GENERIC). - Add hierarchical eXecute Never permissions for all page tables. - Add arm64 prctl(PR_PAC_{SET,GET}_ENABLED_KEYS) allowing user programs to control which PAC keys are enabled in a particular task. - arm64 kselftests for BTI and some improvements to the MTE tests. - Minor improvements to the compat vdso and sigpage. - Miscellaneous cleanups. * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (86 commits) arm64/sve: Add compile time checks for SVE hooks in generic functions arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG. arm64: pac: Optimize kernel entry/exit key installation code paths arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS) arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere arm64/sve: Remove redundant system_supports_sve() tests arm64: fpsimd: run kernel mode NEON with softirqs disabled arm64: assembler: introduce wxN aliases for wN registers arm64: assembler: remove conditional NEON yield macros kasan, arm64: tests supports for HW_TAGS async mode arm64: mte: Report async tag faults before suspend arm64: mte: Enable async tag check fault arm64: mte: Conditionally compile mte_enable_kernel_*() arm64: mte: Enable TCO in functions that can read beyond buffer limits kasan: Add report for async mode arm64: mte: Drop arch_enable_tagging() kasan: Add KASAN mode kernel parameter arm64: mte: Add asynchronous mode support arm64: Get rid of CONFIG_ARM64_VHE arm64: Cope with CPUs stuck in VHE mode ...
2021-04-13arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.zhouchuangao
It can be optimized at compile time. Signed-off-by: zhouchuangao <zhouchuangao@vivo.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/1617105472-6081-1-git-send-email-zhouchuangao@vivo.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
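The transformation, with an illustrative condition:

    /* before */
    if (cond)
            BUG();

    /* after: a single statement whose check the compiler can fold */
    BUG_ON(cond);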
2021-04-13arm64: kprobes: Restore local irqflag if kprobes is cancelledJisheng Zhang
If the instruction being single-stepped causes a page fault, the kprobe is cancelled to let the page fault handler continue as a normal page fault. But the local irqflags are disabled, so the CPU will restore pstate with DAIF masked. After the page fault is serviced, the kprobe is triggered again and we overwrite the saved_irqflag by calling kprobes_save_local_irqflag(). NOTE: DAIF is masked in this new saved irqflag. After the kprobe is serviced, the CPU pstate is restored with DAIF masked. This patch is inspired by a patch for riscv from Liao Chang. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20210412174101.6bfb0594@xhacker.debian Signed-off-by: Will Deacon <will@kernel.org>
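A hedged sketch of the cancellation path after the fix (helper names are the real arm64 ones; surrounding code abridged):

    if (kcb->kprobe_status == KPROBE_REENTER) {
            restore_previous_kprobe(kcb);
    } else {
            /* the fix: restore the irqflag saved at setup time, so a later
             * re-trigger does not save an already-DAIF-masked pstate */
            kprobes_restore_local_irqflag(kcb, regs);
            reset_current_kprobe();
    }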
2021-01-22arm64: kprobes: Fix Unexpected kernel BRK exception at EL1Qais Yousef
I was hitting the below panic continuously when attaching kprobes to scheduler functions [ 159.045212] Unexpected kernel BRK exception at EL1 [ 159.053753] Internal error: BRK handler: f2000006 [#1] PREEMPT SMP [ 159.059954] Modules linked in: [ 159.063025] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 5.11.0-rc4-00008-g1e2a199f6ccd #56 [rt-app] <notice> [1] Exiting.[ 159.071166] Hardware name: ARM Juno development board (r2) (DT) [ 159.079689] pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO BTYPE=--) [ 159.085723] pc : 0xffff80001624501c [ 159.089377] lr : attach_entity_load_avg+0x2ac/0x350 [ 159.094271] sp : ffff80001622b640 [rt-app] <notice> [0] Exiting.[ 159.097591] x29: ffff80001622b640 x28: 0000000000000001 [ 159.105515] x27: 0000000000000049 x26: ffff000800b79980 [ 159.110847] x25: ffff00097ef37840 x24: 0000000000000000 [ 159.116331] x23: 00000024eacec1ec x22: ffff00097ef12b90 [ 159.121663] x21: ffff00097ef37700 x20: ffff800010119170 [rt-app] <notice> [11] Exiting.[ 159.126995] x19: ffff00097ef37840 x18: 000000000000000e [ 159.135003] x17: 0000000000000001 x16: 0000000000000019 [ 159.140335] x15: 0000000000000000 x14: 0000000000000000 [ 159.145666] x13: 0000000000000002 x12: 0000000000000002 [ 159.150996] x11: ffff80001592f9f0 x10: 0000000000000060 [ 159.156327] x9 : ffff8000100f6f9c x8 : be618290de0999a1 [ 159.161659] x7 : ffff80096a4b1000 x6 : 0000000000000000 [ 159.166990] x5 : ffff00097ef37840 x4 : 0000000000000000 [ 159.172321] x3 : ffff000800328948 x2 : 0000000000000000 [ 159.177652] x1 : 0000002507d52fec x0 : ffff00097ef12b90 [ 159.182983] Call trace: [ 159.185433] 0xffff80001624501c [ 159.188581] update_load_avg+0x2d0/0x778 [ 159.192516] enqueue_task_fair+0x134/0xe20 [ 159.196625] enqueue_task+0x4c/0x2c8 [ 159.200211] ttwu_do_activate+0x70/0x138 [ 159.204147] sched_ttwu_pending+0xbc/0x160 [ 159.208253] flush_smp_call_function_queue+0x16c/0x320 [ 159.213408] generic_smp_call_function_single_interrupt+0x1c/0x28 [ 159.219521] ipi_handler+0x1e8/0x3c8 [ 159.223106] handle_percpu_devid_irq+0xd8/0x460 [ 159.227650] generic_handle_irq+0x38/0x50 [ 159.231672] __handle_domain_irq+0x6c/0xc8 [ 159.235781] gic_handle_irq+0xcc/0xf0 [ 159.239452] el1_irq+0xb4/0x180 [ 159.242600] rcu_is_watching+0x28/0x70 [ 159.246359] rcu_read_lock_held_common+0x44/0x88 [ 159.250991] rcu_read_lock_any_held+0x30/0xc0 [ 159.255360] kretprobe_dispatcher+0xc4/0xf0 [ 159.259555] __kretprobe_trampoline_handler+0xc0/0x150 [ 159.264710] trampoline_probe_handler+0x38/0x58 [ 159.269255] kretprobe_trampoline+0x70/0xc4 [ 159.273450] run_rebalance_domains+0x54/0x80 [ 159.277734] __do_softirq+0x164/0x684 [ 159.281406] irq_exit+0x198/0x1b8 [ 159.284731] __handle_domain_irq+0x70/0xc8 [ 159.288840] gic_handle_irq+0xb0/0xf0 [ 159.292510] el1_irq+0xb4/0x180 [ 159.295658] arch_cpu_idle+0x18/0x28 [ 159.299245] default_idle_call+0x9c/0x3e8 [ 159.303265] do_idle+0x25c/0x2a8 [ 159.306502] cpu_startup_entry+0x2c/0x78 [ 159.310436] secondary_start_kernel+0x160/0x198 [ 159.314984] Code: d42000c0 aa1e03e9 d42000c0 aa1e03e9 (d42000c0) After a bit of head scratching and debugging it turned out that it is due to kprobe handler being interrupted by a tick that causes us to go into (I think another) kprobe handler. The culprit was kprobe_breakpoint_ss_handler() returning DBG_HOOK_ERROR which leads to the Unexpected kernel BRK exception. Reverting commit ba090f9cafd5 ("arm64: kprobes: Remove redundant kprobe_step_ctx") seemed to fix the problem for me. Further analysis showed that kcb->kprobe_status is set to KPROBE_REENTER when the error occurs. 
By teaching kprobe_breakpoint_ss_handler() to handle this status I can no longer reproduce the problem. Fixes: ba090f9cafd5 ("arm64: kprobes: Remove redundant kprobe_step_ctx") Signed-off-by: Qais Yousef <qais.yousef@arm.com> Acked-by: Will Deacon <will@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20210122110909.3324607-1-qais.yousef@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64: kprobes: Remove redundant kprobe_step_ctxMasami Hiramatsu
The kprobe_step_ctx (kcb->ss_ctx) has ss_pending and match_addr, but these are redundant because they can be replaced by KPROBE_HIT_SS and &cur_kprobe->ainsn.api.insn[1] respectively. To simplify the code, remove the kprobe_step_ctx. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103134900.337243-2-jean-philippe@linaro.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-03arm64: kprobes: Use BRK instead of single-step when executing instructions out-of-lineJean-Philippe Brucker
Commit 36dadef23fcc ("kprobes: Init kprobes in early_initcall") enabled using kprobes from early_initcall. Unfortunately at this point the hardware debug infrastructure is not operational. The OS lock may still be locked, and the hardware watchpoints may have unknown values when kprobe enables debug monitors to single-step instructions. Rather than using hardware single-step, append a BRK instruction after the instruction to be executed out-of-line. Fixes: 36dadef23fcc ("kprobes: Init kprobes in early_initcall") Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20201103134900.337243-1-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
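Conceptually the out-of-line slot now looks like this (a sketch; BRK64_OPCODE_KPROBES_SS is the dedicated opcode added for this):

    /*
     * OOL slot after this change:
     *
     *   slot[0] = copied probed instruction ; executed out-of-line
     *   slot[1] = BRK #KPROBES_BRK_SS_IMM   ; traps into kprobe_breakpoint_ss_handler()
     *
     * No hardware single-step state (MDSCR_EL1.SS, OS lock, etc.) is
     * involved, so this works even from early_initcall.
     */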
2020-09-08arm64: kprobes: Use generic kretprobe trampoline handlerMasami Hiramatsu
Use the generic kretprobe trampoline handler, and use kernel_stack_pointer(regs) for framepointer verification. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/159870603544.1229682.10309733593594205725.stgit@devnote2
2020-07-03vmalloc: fix the owner argument for the new __vmalloc_node_range callersChristoph Hellwig
Fix the recently added new __vmalloc_node_range callers to pass the correct values as the owner for display in /proc/vmallocinfo. Fixes: 800e26b81311 ("x86/hyperv: allocate the hypercall page with only read and execute bits") Fixes: 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page") Fixes: 7a0e27b2a0ce ("mm: remove vmalloc_exec") Reported-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200627075649.2455097-1-hch@lst.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-26arm64: use PAGE_KERNEL_ROX directly in alloc_insn_pageChristoph Hellwig
Use PAGE_KERNEL_ROX directly instead of allocating RWX and setting the page read-only just after the allocation. Link: http://lkml.kernel.org/r/20200618064307.32739-3-hch@lst.de Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dexuan Cui <decui@microsoft.com> Cc: Jessica Yu <jeyu@kernel.org> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: Wei Liu <wei.liu@kernel.org> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
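Combined with the owner fix in the 2020-07-03 entry above, the allocator ends up looking roughly like this:

    void *alloc_insn_page(void)
    {
            /* allocate the slot page read-only + executable from the start;
             * VM_FLUSH_RESET_PERMS avoids stale TLB entries once it is freed */
            return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
                            GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
                            NUMA_NO_NODE, __builtin_return_address(0));
    }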
2019-10-28arm64: remove __exception annotationsJames Morse
Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded stack frame") arm64 has not used the __exception annotation to dump the pt_regs during stack tracing. in_exception_text() has no callers. This annotation is now only used to blacklist kprobes; it means the same as __kprobes. Section annotations like this require the functions to be grouped together between the start/end markers, and placed according to the linker script. For kprobes we also have NOKPROBE_SYMBOL(), which logs the symbol address in a section that kprobes parses and blacklists at boot. Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of blacklisted symbols, and saves us from having an arm64-specific spelling of __kprobes. do_debug_exception() already has a NOKPROBE_SYMBOL() annotation. Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
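Usage of the preferred annotation, for reference:

    /* records the symbol's address in the _kprobe_blacklist section, which
     * the kprobes core parses and blacklists at boot */
    NOKPROBE_SYMBOL(do_debug_exception);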
2019-08-02arm64: kprobes: Recover pstate.D in single-step exception handlerMasami Hiramatsu
kprobes manipulates the interrupted PSTATE for single-step, and doesn't restore it. Thus, if we put a kprobe where pstate.D (debug) is masked, the mask will be cleared after the kprobe hits. Moreover, in the most complicated case, this can lead to a kernel crash with the below message when a nested kprobe hits. [ 152.118921] Unexpected kernel single-step exception at EL1 When the 1st kprobe hits, do_debug_exception() will be called. At this point, the debug exception (= pstate.D) must be masked (=1). But if another kprobe hits before the single-step of the first kprobe (e.g. inside the user pre_handler), it unmasks the debug exception (pstate.D = 0) and returns. Then, when the 1st kprobe sets up the single-step, it saves the current DAIF, masks DAIF, enables single-step, and restores DAIF. However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the single-step exception happens soon after restoring DAIF. This was introduced by commit 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler") To solve this issue, store all DAIF bits and restore them after single-stepping. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Fixes: 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler") Reviewed-by: James Morse <james.morse@arm.com> Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
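The resulting helper pair looks roughly like this (a hedged sketch matching the shape of the arm64 fix):

    static void kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
                                           struct pt_regs *regs)
    {
            kcb->saved_irqflag = regs->pstate & DAIF_MASK;  /* all of D, A, I, F */
            regs->pstate |= PSR_I_BIT | PSR_D_BIT;
    }

    static void kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
                                              struct pt_regs *regs)
    {
            regs->pstate &= ~DAIF_MASK;
            regs->pstate |= kcb->saved_irqflag;             /* D comes back too */
    }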
2019-06-24arm64/kprobes: set VM_FLUSH_RESET_PERMS on kprobe instruction pagesArd Biesheuvel
In order to avoid transient inconsistencies where freed code pages are remapped writable while stale TLB entries still exist on other cores, mark the kprobes text pages with the VM_FLUSH_RESET_PERMS attribute. This instructs the core vmalloc code not to defer the TLB flush when this region is unmapped and returned to the page allocator. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-05-30treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 174Thomas Gleixner
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 655 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070034.575739538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
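The boilerplate is replaced by a single machine-readable line at the top of each file, e.g.:

    // SPDX-License-Identifier: GPL-2.0-only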
2019-04-09arm64: probes: Move magic BRK values into brk-imm.hWill Deacon
kprobes and uprobes reserve some BRK immediates for installing their probes. Define these along with the other reservations in brk-imm.h and rename the ESR definitions to be consistent with the others that we already have. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-09arm64: debug: Remove redundant user_mode(regs) checks from debug handlersWill Deacon
Now that the debug hook dispatching code takes the triggering exception level into account, there's no need for the hooks themselves to poke around with user_mode(regs). Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-09arm64: kprobes: Avoid calling kprobes debug handlers explicitlyWill Deacon
Kprobes bypasses our debug hook registration code so that it doesn't get tangled up with recursive debug exceptions from things like lockdep: http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/324385.html However, since then, (a) the hook list has become RCU protected and (b) the kprobes hooks were found not to filter out exceptions from userspace correctly. On top of that, the step handler is invoked directly from single_step_handler(), which *does* use the debug hook list, so it's clearly not the end of the world. For now, have kprobes use the debug hook registration API like everybody else. We can revisit this in the future if this is found to limit coverage significantly. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-03-19arm64: kprobes: Use arch_populate_kprobe_blacklist()Masami Hiramatsu
Use arch_populate_kprobe_blacklist() instead of arch_within_kprobe_blacklist() so that we can see the full blacklisted symbols under the debugfs. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: Add arch_populate_kprobe_blacklist() comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-03-19arm64: kprobes: Move exception_text check in blacklistMasami Hiramatsu
Move the exception/irqentry text address checks into the blacklist, since those are symbol-based rejections. If we prohibit probing on the symbols in exception_text, those should be blacklisted. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-03-19arm64: kprobes: Remove unneeded RODATA checkMasami Hiramatsu
Remove the unneeded RODATA check from arch_prepare_kprobe(). Since check_kprobe_address_safe() has already ensured that the probe address is in kernel text, we don't need to check whether the address is in RODATA or not; that must always be false. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-03-19arm64: kprobes: Move extable address check into arch_prepare_kprobe()Masami Hiramatsu
Move the extable address check into arch_prepare_kprobe() from arch_within_kprobe_blacklist(). The blacklist is exposed via debugfs as a list of symbols. The extable entries are smaller, so they must be filtered out by arch_prepare_kprobe(). Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-03-10Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linuxLinus Torvalds
Pull arm64 updates from Catalin Marinas: - Pseudo NMI support for arm64 using GICv3 interrupt priorities - uaccess macros clean-up (unsafe user accessors also merged but reverted, waiting for objtool support on arm64) - ptrace regsets for Pointer Authentication (ARMv8.3) key management - inX() ordering w.r.t. delay() on arm64 and riscv (acks in place by the riscv maintainers) - arm64/perf updates: PMU bindings converted to json-schema, unused variable and misleading comment removed - arm64/debug fixes to ensure checking of the triggering exception level and to avoid the propagation of the UNKNOWN FAR value into the si_code for debug signals - Workaround for Fujitsu A64FX erratum 010001 - lib/raid6 ARM NEON optimisations - NR_CPUS now defaults to 256 on arm64 - Minor clean-ups (documentation/comments, Kconfig warning, unused asm-offsets, clang warnings) - MAINTAINERS update for list information to the ARM64 ACPI entry * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (54 commits) arm64: mmu: drop paging_init comments arm64: debug: Ensure debug handlers check triggering exception level arm64: debug: Don't propagate UNKNOWN FAR into si_code for debug signals Revert "arm64: uaccess: Implement unsafe accessors" arm64: avoid clang warning about self-assignment arm64: Kconfig.platforms: fix warning unmet direct dependencies lib/raid6: arm: optimize away a mask operation in NEON recovery routine lib/raid6: use vdupq_n_u8 to avoid endianness warnings arm64: io: Hook up __io_par() for inX() ordering riscv: io: Update __io_[p]ar() macros to take an argument asm-generic/io: Pass result of I/O accessor to __io_[p]ar() arm64: Add workaround for Fujitsu A64FX erratum 010001 arm64: Rename get_thread_info() arm64: Remove documentation about TIF_USEDFPU arm64: irqflags: Fix clang build warnings arm64: Enable the support of pseudo-NMIs arm64: Skip irqflags tracing for NMI in IRQs disabled context arm64: Skip preemption when exiting an NMI arm64: Handle serror in NMI context irqchip/gic-v3: Allow interrupts to be set as pseudo-NMI ...
2019-03-01arm64: debug: Ensure debug handlers check triggering exception levelWill Deacon
Debug exception handlers may be called for exceptions generated both by user and kernel code. In many cases, this is checked explicitly, but in other cases things either happen to work by happy accident or they go slightly wrong. For example, executing 'brk #4' from userspace will enter the kprobes code and be ignored, but the instruction will be retried forever in userspace instead of delivering a SIGTRAP. Fix this issue in the most stable-friendly fashion by simply adding explicit checks of the triggering exception level to all of our debug exception handlers. Cc: <stable@vger.kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-02-01arm64: kprobe: Always blacklist the KVM world-switch codeJames Morse
On systems with VHE the kernel and KVM's world-switch code run at the same exception level. Code that is only used on a VHE system does not need to be annotated as __hyp_text as it can reside anywhere in the kernel text. __hyp_text was also used to prevent kprobes from patching breakpoint instructions into this region, as this code runs at a different exception level. While this is no longer true with VHE, KVM still switches VBAR_EL1, meaning a kprobe's breakpoint executed in the world-switch code will cause a hyp-panic. Move the __hyp_text check in the kprobes blacklist so it applies on VHE systems too, to cover the common code and guest enter/exit assembly. Fixes: 888b3c8720e0 ("arm64: Treat all entry code as non-kprobe-able") Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-11-02arm64: kprobe: make page to RO mode when allocate itAnders Roxell
Commit 1404d6f13e47 ("arm64: dump: Add checking for writable and exectuable pages") has successfully identified code that leaves a page with W+X permissions. [ 3.245140] arm64/mm: Found insecure W+X mapping at address (____ptrval____)/0xffff000000d90000 [ 3.245771] WARNING: CPU: 0 PID: 1 at ../arch/arm64/mm/dump.c:232 note_page+0x410/0x420 [ 3.246141] Modules linked in: [ 3.246653] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.0-rc5-next-20180928-00001-ge70ae259b853-dirty #62 [ 3.247008] Hardware name: linux,dummy-virt (DT) [ 3.247347] pstate: 80000005 (Nzcv daif -PAN -UAO) [ 3.247623] pc : note_page+0x410/0x420 [ 3.247898] lr : note_page+0x410/0x420 [ 3.248071] sp : ffff00000804bcd0 [ 3.248254] x29: ffff00000804bcd0 x28: ffff000009274000 [ 3.248578] x27: ffff00000921a000 x26: ffff80007dfff000 [ 3.248845] x25: ffff0000093f5000 x24: ffff000009526f6a [ 3.249109] x23: 0000000000000004 x22: ffff000000d91000 [ 3.249396] x21: ffff000000d90000 x20: 0000000000000000 [ 3.249661] x19: ffff00000804bde8 x18: 0000000000000400 [ 3.249924] x17: 0000000000000000 x16: 0000000000000000 [ 3.250271] x15: ffffffffffffffff x14: 295f5f5f5f6c6176 [ 3.250594] x13: 7274705f5f5f5f28 x12: 2073736572646461 [ 3.250941] x11: 20746120676e6970 x10: 70616d20582b5720 [ 3.251252] x9 : 6572756365736e69 x8 : 3039643030303030 [ 3.251519] x7 : 306666666678302f x6 : ffff0000095467b2 [ 3.251802] x5 : 0000000000000000 x4 : 0000000000000000 [ 3.252060] x3 : 0000000000000000 x2 : ffffffffffffffff [ 3.252323] x1 : 4d151327adc50b00 x0 : 0000000000000000 [ 3.252664] Call trace: [ 3.252953] note_page+0x410/0x420 [ 3.253186] walk_pgd+0x12c/0x238 [ 3.253417] ptdump_check_wx+0x68/0xf8 [ 3.253637] mark_rodata_ro+0x68/0x98 [ 3.253847] kernel_init+0x38/0x160 [ 3.254103] ret_from_fork+0x10/0x18 kprobes allocates a writable executable page with module_alloc() in order to store executable code. Rework this so that when a page is allocated, it is set to RO mode. Inspired by commit 63fef14fc98a ("kprobes/x86: Make insn buffer always ROX and use text_poke()"). Suggested-by: Arnd Bergmann <arnd@arndb.de> Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> [catalin.marinas@arm.com: removed unnecessary casts] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-10-01arm64/kprobes: remove an extra semicolon in arch_prepare_kprobezhong jiang
There is an extra semicolon in arch_prepare_kprobe, remove it. Signed-off-by: zhong jiang <zhongjiang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-06-21kprobes/arm64: Fix %p uses in error messagesMasami Hiramatsu
Fix %p uses in error messages by removing it because those are redundant or meaningless. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Howells <dhowells@redhat.com> Cc: David S . Miller <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Jon Medhurst <tixy@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Tobin C . Harding <me@tobin.cc> Cc: acme@kernel.org Cc: akpm@linux-foundation.org Cc: brueckner@linux.vnet.ibm.com Cc: linux-arch@vger.kernel.org Cc: rostedt@goodmis.org Cc: schwidefsky@de.ibm.com Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/lkml/152491908405.9916.12425053035317241111.stgit@devbox Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-06-21bpf/error-inject/kprobes: Clear current_kprobe and enable preempt in kprobeMasami Hiramatsu
Clear current_kprobe and enable preemption in kprobe even if pre_handler returns !0. This simplifies function override using kprobes. Jprobes used to require keeping preemption disabled and keeping current_kprobe until it returned to the original function entry. For this reason kprobe_int3_handler() and similar arch-dependent kprobe handlers check the pre_handler result and exit without enabling preemption if the result is !0. After removing jprobes, kprobes does not need to keep preempt disabled even if the user handler returns !0 anymore. But since the function override handlers in error-inject and bpf also return !0 if they override a function, to balance the preempt count they enable preemption and reset the current kprobe by themselves. That is a bad design and very buggy. This fixes such unbalanced preempt-count and current_kprobe settings in kprobes, bpf and error-inject. Note: for powerpc and x86, this removes all preempt_disable from kprobe_ftrace_handler because ftrace callbacks are called with preemption disabled. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <jhogan@kernel.org> Cc: Josef Bacik <jbacik@fb.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-ia64@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-s390@vger.kernel.org Cc: linux-sh@vger.kernel.org Cc: linux-snps-arc@lists.infradead.org Cc: linuxppc-dev@lists.ozlabs.org Cc: sparclinux@vger.kernel.org Link: https://lore.kernel.org/lkml/152942494574.15209.12323837825873032258.stgit@devbox Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-06-21arm64/kprobes: Don't call the ->break_handler() in arm64 kprobes codeMasami Hiramatsu
Don't call the ->break_handler() from the arm64 kprobes code, because it was only used by jprobes which got removed. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/lkml/152942474231.15209.17684808374429473004.stgit@devbox Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-06-21arm64/kprobes: Remove jprobe implementationMasami Hiramatsu
Remove arch dependent setjump/longjump functions and unused fields in kprobe_ctlblk for jprobes from arch/arm64. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/lkml/152942442318.15209.17767976282305601884.stgit@devbox Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-05-30arm64: Call __show_regs directlyKefeng Wang
Generic code expects show_regs() to also dump the stack, but arm64's show_regs() does not do this. Some arm64 callers of show_regs() *only* want the registers dumped, without the stack. To enable generic code to work as expected, we need to make show_regs() dump the stack. Where we only want the registers dumped, we must use __show_regs(). This patch updates code to use __show_regs() where only registers are desired. A subsequent patch will modify show_regs(). Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-10arm64: kprobes: remove kprobe_exceptions_notifyNaveen N. Rao
Commit fc62d0207ae0 ("kprobes: Introduce weak variant of kprobe_exceptions_notify()") introduces a generic empty version of the function for architectures that don't need special handling, like arm64. As such, remove the arch/arm64/ specific handler. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-02sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>Ingo Molnar
We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/debug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-12-24Replace <asm/uaccess.h> with <linux/uaccess.h> globallyLinus Torvalds
This was entirely automated, using the script by Al: PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>' sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \ $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h) to do the replacement at the end of the merge window. Requested-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-11-07arm64: kprobe: protect/rename few definitions to be reused by uprobePratyush Anand
The decode-insn code has to be reused by the arm64 uprobe implementation as well. Therefore, this patch protects some portions of the kprobe code and renames a few others, so that the decode-insn functionality can be reused by uprobes even when CONFIG_KPROBES is not defined. kprobe_opcode_t and struct arch_specific_insn are also defined by linux/kprobes.h when CONFIG_KPROBES is not defined. So, protect these definitions in asm/probes.h. linux/kprobes.h already includes asm/kprobes.h. Therefore, remove the inclusion of asm/kprobes.h from decode-insn.c. There are some definitions like kprobe_insn and kprobes_handler_t etc. that can be re-used by uprobes, so it is better to remove the 'k' from their names. struct arch_specific_insn is specific to kprobes. Therefore, introduce a new struct arch_probe_insn which will be common to both kprobes and uprobes, so that the decode-insn code can be shared. Modify the kprobe code accordingly. The function arm_probe_decode_insn() will be needed by uprobes as well, so make it global. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-09-20arm64: migrate exception table users off module.h and onto extable.hPaul Gortmaker
These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h" so now move over to that and avoid all the extra header content in module.h that we don't really need to compile these files. Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-25arm64: Create sections.hJames Morse
Each time new section markers are added, kernel/vmlinux.ld.S is updated, and new extern char __start_foo[] definitions are scattered through the tree. Create asm/sections.h to collect these definitions (and include the existing asm-generic version). Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-25arm64: kprobe: Always clear pstate.D in breakpoint exception handlerPratyush Anand
Whenever we hit a kprobe from a non-kprobe debug exception handler, we hit infinite occurrences of "Unexpected kernel single-step exception at EL1". PSTATE.D is the debug exception mask bit. It is set whenever we enter an exception mode. When it is set, Watchpoint, Breakpoint, and Software Step exceptions are masked. However, software Breakpoint Instruction exceptions can never be masked. Therefore, if we ever execute a BRK instruction, irrespective of the D-bit setting, we will receive a corresponding breakpoint exception. For example: - We are executing a kprobe pre/post handler, and a kprobe has been inserted in one of the instructions of a function called by the handler. So, it executes the BRK instruction and we land in the KPROBE_REENTER case. (This case is already handled by the current code.) - We are executing a uprobe handler or any other BRK handler such as in WARN_ON (BRK BUG_BRK_IMM), and we trace that path using a kprobe. So, we enter the kprobe breakpoint handler from another BRK handler. (This case is not handled currently.) In all such cases the kprobe breakpoint exception will be raised while we are already in debug exception mode. SPSR's D bit (bit 9) shows the value of PSTATE.D immediately before the exception was taken. So, in the above example cases we would find it set in the kprobe breakpoint handler. A single-step exception will always follow the kprobe breakpoint exception. However, it will only be raised gracefully if we clear the D bit while returning from the breakpoint exception. If the D bit is set, it results in an undefined exception, and when its handler enables debug, a single-step exception is generated; however, it will never be handled (because the address does not match, so it is treated as unexpected). This patch clears the D-flag unconditionally in setup_singlestep, so that we can always get the single-step exception correctly after returning from the breakpoint exception. Additionally, it also removes the D-flag set statement for the KPROBE_REENTER return path, because the debug exception for KPROBE_REENTER will always take place in a debug exception state, so the D-flag will already be set in this case. Acked-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
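A hedged sketch of the core change in setup_singlestep() (the real patch may route this through a small helper):

    /* Always unmask PSTATE.D before returning from the breakpoint exception,
     * so the subsequent single-step exception can actually be delivered. */
    regs->pstate &= ~PSR_D_BIT;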