path: root/kernel
Age  Commit message  Author
8 days  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
8 days  Merge tag 'v6.1.130' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.130 stable release
11 days  sched/core: Prevent rescheduling when interrupts are disabled  [Thomas Gleixner]
commit 82c387ef7568c0d96a918a5a78d9cad6256cfa15 upstream.

David reported a warning observed while loop testing kexec jump:

  Interrupts enabled after irqrouter_resume+0x0/0x50
  WARNING: CPU: 0 PID: 560 at drivers/base/syscore.c:103 syscore_resume+0x18a/0x220
   kernel_kexec+0xf6/0x180
   __do_sys_reboot+0x206/0x250
   do_syscall_64+0x95/0x180

The corresponding interrupt flag trace:

  hardirqs last enabled at (15573): [<ffffffffa8281b8e>] __up_console_sem+0x7e/0x90
  hardirqs last disabled at (15580): [<ffffffffa8281b73>] __up_console_sem+0x63/0x90

That means __up_console_sem() was invoked with interrupts enabled. Further instrumentation revealed that in the interrupt-disabled section of kexec jump one of the syscore_suspend() callbacks woke up a task, which set the NEED_RESCHED flag. A later callback in the resume path invoked cond_resched(), which in turn led to the invocation of the scheduler:

  __cond_resched+0x21/0x60
  down_timeout+0x18/0x60
  acpi_os_wait_semaphore+0x4c/0x80
  acpi_ut_acquire_mutex+0x3d/0x100
  acpi_ns_get_node+0x27/0x60
  acpi_ns_evaluate+0x1cb/0x2d0
  acpi_rs_set_srs_method_data+0x156/0x190
  acpi_pci_link_set+0x11c/0x290
  irqrouter_resume+0x54/0x60
  syscore_resume+0x6a/0x200
  kernel_kexec+0x145/0x1c0
  __do_sys_reboot+0xeb/0x240
  do_syscall_64+0x95/0x180

This is a long-standing problem, which probably became more visible with the recent printk changes. Something does a task wakeup and the scheduler sets the NEED_RESCHED flag. cond_resched() sees it set and invokes schedule() from a completely bogus context. The scheduler enables interrupts after context switching, which causes the above warning at the end.

Quite a few of the code paths in syscore_suspend()/resume() can result in triggering a wakeup with exactly the same consequences. They might not have done so yet, but as they share a lot of code with normal operations it's just a question of time.

The problem only affects the PREEMPT_NONE and PREEMPT_VOLUNTARY scheduling models. Full preemption is not affected, as cond_resched() is disabled and the preemption check preemptible() takes the interrupt-disabled flag into account.

Cure the problem by adding a corresponding check into cond_resched().

Reported-by: David Woodhouse <dwmw@amazon.co.uk>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@vger.kernel.org
Closes: https://lore.kernel.org/all/7717fe2ac0ce5f0a2c43fdab8b11f4483d54a2a4.camel@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
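A minimal sketch of the added guard, assuming the v6.1 shape of __cond_resched() in kernel/sched/core.c (illustrative, not the verbatim upstream diff):

    int __sched __cond_resched(void)
    {
            /*
             * Do not enter the scheduler from a context that has interrupts
             * disabled (e.g. the kexec jump / syscore suspend-resume window),
             * even if a bogus wakeup has set NEED_RESCHED.
             */
            if (should_resched(0) && !irqs_disabled()) {
                    preempt_schedule_common();
                    return 1;
            }
            return 0;
    }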
11 days  perf/core: Fix low freq setting via IOC_PERIOD  [Kan Liang]
commit 0d39844150546fa1415127c5fbae26db64070dd3 upstream. A low attr::freq value cannot be set via IOC_PERIOD on some platforms. The perf_event_check_period() introduced in: 81ec3f3c4c4d ("perf/x86: Add check_period PMU callback") was intended to check the period, rather than the frequency. A low frequency may be mistakenly rejected by limit_period(). Fix it. Fixes: 81ec3f3c4c4d ("perf/x86: Add check_period PMU callback") Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250117151913.3043942-2-kan.liang@linux.intel.com Closes: https://lore.kernel.org/lkml/20250115154949.3147-1-ravi.bangoria@amd.com/ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
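A hedged sketch of the distinction the fix draws, with names following kernel/events/core.c but treated as illustrative: the PMU's check_period callback is only consulted for fixed-period events, so a low frequency set via IOC_PERIOD is no longer run through limit_period().

    static int _perf_event_period(struct perf_event *event, u64 value)
    {
            if (!value)
                    return -EINVAL;

            if (event->attr.freq) {
                    /* Frequency mode: only bound it against the sysctl. */
                    if (value > sysctl_perf_event_sample_rate)
                            return -EINVAL;
            } else {
                    /* Period mode: let the PMU driver veto the raw period. */
                    if (perf_event_check_period(event, value))
                            return -EINVAL;
                    if (value & (1ULL << 63))
                            return -EINVAL;
            }

            /* ...store the new sample_period/sample_freq and restart... */
            return 0;
    }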
11 days  ftrace: Avoid potential division by zero in function_stat_show()  [Nikolay Kuratov]
commit a1a7eb89ca0b89dc1c326eeee2596f263291aca3 upstream. Check whether the denominator expression x * (x - 1) * 1000 mod {2^32, 2^64} produces zero and skip the stddev computation in that case. For now don't care about rec->counter * rec->counter overflow because rec->time * rec->time overflow will likely happen earlier. Cc: stable@vger.kernel.org Cc: Wen Yang <wenyang@linux.alibaba.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/20250206090156.1561783-1-kniv@yandex-team.ru Fixes: e31f7939c1c27 ("ftrace: Avoid potential division by zero in function profiler") Signed-off-by: Nikolay Kuratov <kniv@yandex-team.ru> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
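The guard can be shown as a small helper (a sketch, not the upstream code; div64_u64() comes from <linux/math64.h>):

    static u64 stat_stddev_sketch(unsigned long count, u64 time, u64 time_squared)
    {
            u64 num, den;

            if (count <= 1)
                    return 0;

            num = count * time_squared - time * time;
            den = (u64)count * (count - 1) * 1000;

            /* The product may wrap to zero in 32/64-bit arithmetic. */
            if (!den)
                    return 0;

            return div64_u64(num, den);
    }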
11 days  tracing: Fix bad hist from corrupting named_triggers list  [Steven Rostedt]
commit 6f86bdeab633a56d5c6dccf1a2c5989b6a5e323e upstream.

The following commands cause a crash:

 ~# cd /sys/kernel/tracing/events/rcu/rcu_callback
 ~# echo 'hist:name=bad:keys=common_pid:onmax(bogus).save(common_pid)' > trigger
 bash: echo: write error: Invalid argument
 ~# echo 'hist:name=bad:keys=common_pid' > trigger

Because the following occurs:

 event_trigger_write() {
   trigger_process_regex() {
     event_hist_trigger_parse() {

       data = event_trigger_alloc(..);

       event_trigger_register(.., data) {
         cmd_ops->reg(.., data, ..) [hist_register_trigger()] {
           data->ops->init() [event_hist_trigger_init()] {
             save_named_trigger(name, data) {
               list_add(&data->named_list, &named_triggers);
             }
           }
         }
       }

       ret = create_actions(); (return -EINVAL)
       if (ret)
         goto out_unreg;
 [..]
       ret = hist_trigger_enable(data, ...) {
         list_add_tail_rcu(&data->list, &file->triggers); <<<---- SKIPPED!!! (this is important!)
 [..]
  out_unreg:
       event_hist_unregister(.., data) {
         cmd_ops->unreg(.., data, ..) [hist_unregister_trigger()] {
           list_for_each_entry(iter, &file->triggers, list) {
             if (!hist_trigger_match(data, iter, named_data, false))   <- never matches
               continue;
             [..]
             test = iter;
           }
           if (test && test->ops->free) <<<-- test is NULL
             test->ops->free(test) [event_hist_trigger_free()] {
               [..]
               if (data->name)
                 del_named_trigger(data) {
                   list_del(&data->named_list);  <<<<-- NEVER gets removed!
                 }
             }
         }
       }
 [..]
       kfree(data); <<<-- frees item but it is still on the list

The next time a hist trigger with a name is registered, it causes a use-after-free bug and the kernel can crash.

Move the code around such that if event_trigger_register() succeeds, the next thing called is hist_trigger_enable(), which adds it to the list.

A bunch of actions is called if get_named_trigger_data() returns false. But that doesn't need to be called after event_trigger_register(), so it can be moved up, allowing event_trigger_register() to be called just before hist_trigger_enable(), keeping them together and allowing the file->triggers list to be properly populated.

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20250227163944.1c37f85f@gandalf.local.home
Fixes: 067fe038e70f6 ("tracing: Add variable reference handling to hist triggers")
Reported-by: Tomas Glozar <tglozar@redhat.com>
Tested-by: Tomas Glozar <tglozar@redhat.com>
Reviewed-by: Tom Zanussi <zanussi@kernel.org>
Closes: https://lore.kernel.org/all/CAP4=nvTsxjckSBTz=Oe_UYh8keD9_sZC4i++4h72mJLic4_W4A@mail.gmail.com/
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 days  uprobes: Reject the shared zeropage in uprobe_write_opcode()  [Tong Tiangen]
[ Upstream commit bddf10d26e6e5114e7415a0e442ec6f51a559468 ] We triggered the following crash in syzkaller tests: BUG: Bad page state in process syz.7.38 pfn:1eff3 page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1eff3 flags: 0x3fffff00004004(referenced|reserved|node=0|zone=1|lastcpupid=0x1fffff) raw: 003fffff00004004 ffffe6c6c07bfcc8 ffffe6c6c07bfcc8 0000000000000000 raw: 0000000000000000 0000000000000000 00000000fffffffe 0000000000000000 page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014 Call Trace: <TASK> dump_stack_lvl+0x32/0x50 bad_page+0x69/0xf0 free_unref_page_prepare+0x401/0x500 free_unref_page+0x6d/0x1b0 uprobe_write_opcode+0x460/0x8e0 install_breakpoint.part.0+0x51/0x80 register_for_each_vma+0x1d9/0x2b0 __uprobe_register+0x245/0x300 bpf_uprobe_multi_link_attach+0x29b/0x4f0 link_create+0x1e2/0x280 __sys_bpf+0x75f/0xac0 __x64_sys_bpf+0x1a/0x30 do_syscall_64+0x56/0x100 entry_SYSCALL_64_after_hwframe+0x78/0xe2 BUG: Bad rss-counter state mm:00000000452453e0 type:MM_FILEPAGES val:-1 The following syzkaller test case can be used to reproduce: r2 = creat(&(0x7f0000000000)='./file0\x00', 0x8) write$nbd(r2, &(0x7f0000000580)=ANY=[], 0x10) r4 = openat(0xffffffffffffff9c, &(0x7f0000000040)='./file0\x00', 0x42, 0x0) mmap$IORING_OFF_SQ_RING(&(0x7f0000ffd000/0x3000)=nil, 0x3000, 0x0, 0x12, r4, 0x0) r5 = userfaultfd(0x80801) ioctl$UFFDIO_API(r5, 0xc018aa3f, &(0x7f0000000040)={0xaa, 0x20}) r6 = userfaultfd(0x80801) ioctl$UFFDIO_API(r6, 0xc018aa3f, &(0x7f0000000140)) ioctl$UFFDIO_REGISTER(r6, 0xc020aa00, &(0x7f0000000100)={{&(0x7f0000ffc000/0x4000)=nil, 0x4000}, 0x2}) ioctl$UFFDIO_ZEROPAGE(r5, 0xc020aa04, &(0x7f0000000000)={{&(0x7f0000ffd000/0x1000)=nil, 0x1000}}) r7 = bpf$PROG_LOAD(0x5, &(0x7f0000000140)={0x2, 0x3, &(0x7f0000000200)=ANY=[@ANYBLOB="1800000000120000000000000000000095"], &(0x7f0000000000)='GPL\x00', 0x7, 0x0, 0x0, 0x0, 0x0, '\x00', 0x0, @fallback=0x30, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, @void, @value}, 0x94) bpf$BPF_LINK_CREATE_XDP(0x1c, &(0x7f0000000040)={r7, 0x0, 0x30, 0x1e, @val=@uprobe_multi={&(0x7f0000000080)='./file0\x00', &(0x7f0000000100)=[0x2], 0x0, 0x0, 0x1}}, 0x40) The cause is that zero pfn is set to the PTE without increasing the RSS count in mfill_atomic_pte_zeropage() and the refcount of zero folio does not increase accordingly. Then, the operation on the same pfn is performed in uprobe_write_opcode()->__replace_page() to unconditional decrease the RSS count and old_folio's refcount. Therefore, two bugs are introduced: 1. The RSS count is incorrect, when process exit, the check_mm() report error "Bad rss-count". 2. The reserved folio (zero folio) is freed when folio->refcount is zero, then free_pages_prepare->free_page_is_bad() report error "Bad page state". There is more, the following warning could also theoretically be triggered: __replace_page() -> ... -> folio_remove_rmap_pte() -> VM_WARN_ON_FOLIO(is_zero_folio(folio), folio) Considering that uprobe hit on the zero folio is a very rare case, just reject zero old folio immediately after get_user_page_vma_remote(). 
[ mingo: Cleaned up the changelog ] Fixes: 7396fa818d62 ("uprobes/core: Make background page replacement logic account for rss_stat counters") Fixes: 2b1444983508 ("uprobes, mm, x86: Add the ability to install and remove uprobes breakpoints") Signed-off-by: Tong Tiangen <tongtiangen@huawei.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20250224031149.1598949-1-tongtiangen@huawei.com Signed-off-by: Sasha Levin <sashal@kernel.org>
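A sketch of the rejection described above, placed right after the old page has been looked up in uprobe_write_opcode(); is_zero_pfn() is the long-standing zeropage test, though the exact helper used upstream may differ:

    /* old_page was just returned by get_user_page_vma_remote(). The shared
     * zeropage has no RSS or refcount accounting to adjust, so refuse to
     * replace it rather than corrupt the counters in __replace_page(). */
    if (is_zero_pfn(page_to_pfn(old_page))) {
            ret = -EINVAL;
            goto put_old;
    }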
11 days  ftrace: Do not add duplicate entries in subops manager ops  [Steven Rostedt]
commit 8eb4b09e0bbd30981305643229fe7640ad41b667 upstream. Check if a function is already in the manager ops of a subops. A manager ops contains multiple subops, and if two or more subops are tracing the same function, the manager ops only needs a single entry in its hash. Cc: stable@vger.kernel.org Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Link: https://lore.kernel.org/20250220202055.226762894@goodmis.org Fixes: 4f554e955614f ("ftrace: Add ftrace_set_filter_ips function") Tested-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 days  ftrace: Correct preemption accounting for function tracing.  [Sebastian Andrzej Siewior]
commit 57b76bedc5c52c66968183b5ef57234894c25ce7 upstream. The function tracer should record the preemption level at the point when the function is invoked. If the tracing subsystem decrement the preemption counter it needs to correct this before feeding the data into the trace buffer. This was broken in the commit cited below while shifting the preempt-disabled section. Use tracing_gen_ctx_dec() which properly subtracts one from the preemption counter on a preemptible kernel. Cc: stable@vger.kernel.org Cc: Wander Lairson Costa <wander@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/20250220140749.pfw8qoNZ@linutronix.de Fixes: ce5e48036c9e7 ("ftrace: disable preemption when recursion locked") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Wander Lairson Costa <wander@redhat.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 days  acct: block access to kernel internal filesystems  [Christian Brauner]
commit 890ed45bde808c422c3c27d3285fc45affa0f930 upstream. There's no point in allowing process accounting to write to anything kernel-internal, nor to procfs or sysfs. Link: https://lore.kernel.org/r/20250127091811.3183623-1-quzicheng@huawei.com Link: https://lore.kernel.org/r/20250211-work-acct-v1-2-1c16aecab8b3@kernel.org Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reviewed-by: Amir Goldstein <amir73il@gmail.com> Reported-by: Zicheng Qu <quzicheng@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
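One plausible shape of the check in acct_on(), sketched from the description above; the exact superblock flags and error path used upstream may differ:

    struct super_block *sb = file_inode(file)->i_sb;

    /* Refuse kernel-internal filesystems outright... */
    if (sb->s_flags & (SB_NOUSER | SB_KERNMOUNT)) {
            filp_close(file, NULL);
            return -EINVAL;
    }

    /* ...and pseudo filesystems such as procfs and sysfs. */
    if (sb->s_iflags & SB_I_USERNS_VISIBLE) {
            filp_close(file, NULL);
            return -EINVAL;
    }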
11 days  acct: perform last write from workqueue  [Christian Brauner]
commit 56d5f3eba3f5de0efdd556de4ef381e109b973a9 upstream. In [1] it was reported that the acct(2) system call can be used to trigger a NULL deref in cases where it is set to write to a file that triggers an internal lookup. This can e.g. happen when pointing acct(2) to /sys/power/resume. At the point where the write to this file happens, the calling task has already exited and called exit_fs(). A lookup will thus trigger a NULL deref when accessing current->fs. Reorganize the code so that the final write happens from the workqueue but with the caller's credentials. This preserves the (strange) permission model and has almost no regression risk. This API should stop to exist, though. Link: https://lore.kernel.org/r/20250127091811.3183623-1-quzicheng@huawei.com [1] Link: https://lore.kernel.org/r/20250211-work-acct-v1-1-1c16aecab8b3@kernel.org Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: Zicheng Qu <quzicheng@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 days  bpf: skip non exist keys in generic_map_lookup_batch  [Yan Zhai]
[ Upstream commit 5644c6b50ffee0a56c1e01430a8c88e34decb120 ]

The generic_map_lookup_batch currently returns EINTR if it fails with ENOENT and retries several times on bpf_map_copy_value. The next batch would start from the same location, presuming it's a transient issue. This is incorrect if a map can actually have "holes", i.e. "get_next_key" can return a key that does not point to a valid value. At least the array of maps type may legitimately contain such holes. Right now, when these holes show up, generic batch lookup cannot proceed any more: it will always fail with EINTR errors.

Rather, do not retry in generic_map_lookup_batch. If it finds a non-existent element, skip to the next key. This simple solution comes with a price: transient errors may not be recovered, and the iteration might cycle back to the first key under parallel deletion. For example, Hou Tao <houtao@huaweicloud.com> pointed out the following scenario:

For an LPM trie map:
(1) ->map_get_next_key(map, prev_key, key) returns a valid key
(2) bpf_map_copy_value() returns -ENOENT
    It means the key must have been deleted concurrently.
(3) goto next_key
    It swaps the prev_key and key.
(4) ->map_get_next_key(map, prev_key, key) again
    prev_key points to a non-existent key; for an LPM trie this is treated
    just like the prev_key=NULL case, and the returned key will be duplicated.

With the retry logic, the iteration can continue to the key next to the deleted one. But if we directly skip to the next key, the iteration loop would restart from the first key for the lpm_trie type. However, not all races may be recovered. For example, if the current key is deleted after instead of before bpf_map_copy_value, or if the prev_key also gets deleted, then the loop will still restart from the first key for lpm_trie anyway.

For generic lookup it might be better to stay simple, i.e. just skip to the next key. To guarantee that the output keys are not duplicated, it is better to implement map-type-specific batch operations, which can properly lock the trie and synchronize with concurrent mutators.

Fixes: cb4d03ab499d ("bpf: Add generic support for lookup batch op")
Closes: https://lore.kernel.org/bpf/Z6JXtA1M5jAZx8xD@debian.debian/
Signed-off-by: Yan Zhai <yan@cloudflare.com>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/85618439eea75930630685c467ccefeac0942e2b.1739171594.git.yan@cloudflare.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
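An abridged sketch of the changed loop body in generic_map_lookup_batch() (identifiers follow kernel/bpf/syscall.c, but this is not the literal diff):

    err = bpf_map_copy_value(map, key, value, attr->batch.elem_flags);
    if (err == -ENOENT)
            goto next_key;          /* a hole: skip it instead of retrying/EINTR */
    if (err)
            goto free_buf;

    /* ...copy key and value out to the user buffers, bump the count... */

    next_key:
            swap(prev_key, key);    /* advance past the (possibly missing) key */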
2025-02-28  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
# Conflicts:
#	drivers/tty/serial/8250/8250_port.c
2025-02-28  Merge tag 'v6.1.129' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.129 stable release
2025-02-21  kdb: Do not assume write() callback available  [John Ogness]
commit 6d3e0d8cc63221dec670d0ee92ac57961581e975 upstream. It is allowed for consoles to not provide a write() callback. For example ttynull does this. Check if a write() callback is available before using it. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230717194607.145135-2-john.ogness@linutronix.de Cc: Brian Norris <briannorris@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
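A sketch of the defensive check in kdb's console output loop; the loop shape is illustrative, the essential line is the con->write test:

    for_each_console(con) {
            if (!(con->flags & CON_ENABLED))
                    continue;
            /* Consoles such as ttynull register without a write() hook. */
            if (!con->write)
                    continue;
            con->write(con, kdb_buffer, len);
    }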
2025-02-21  clocksource: Use migrate_disable() to avoid calling get_random_u32() in atomic context  [Waiman Long]
[ Upstream commit 6bb05a33337b2c842373857b63de5c9bf1ae2a09 ]

The following bug report happened with a PREEMPT_RT kernel:

  BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
  in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2012, name: kwatchdog
  preempt_count: 1, expected: 0
  RCU nest depth: 0, expected: 0
  get_random_u32+0x4f/0x110
  clocksource_verify_choose_cpus+0xab/0x1a0
  clocksource_verify_percpu.part.0+0x6b/0x330
  clocksource_watchdog_kthread+0x193/0x1a0

It is due to the fact that clocksource_verify_choose_cpus() is invoked with preemption disabled. This function invokes get_random_u32() to obtain random numbers for choosing CPUs. The batched_entropy_32 local lock and/or the base_crng.lock spinlock in driver/char/random.c will be acquired during the call. In a PREEMPT_RT kernel, they are both sleeping locks and so cannot be acquired in atomic context.

Fix this problem by using migrate_disable() to allow smp_processor_id() to be reliably used without introducing atomic context. preempt_disable() is then called after clocksource_verify_choose_cpus() but before the clocksource measurement is being run to avoid introducing unexpected latency.

Fixes: 7560c02bdffb ("clocksource: Check per-CPU clock synchronization when marked unstable")
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/all/20250131173323.891943-2-longman@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
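A sketch of the resulting ordering in clocksource_verify_percpu(), assuming the v6.1 structure of kernel/time/clocksource.c:

    migrate_disable();                      /* pin the CPU, stay preemptible */
    clocksource_verify_choose_cpus();       /* may call get_random_u32() */

    preempt_disable();                      /* only around the measurement */
    testcpu = smp_processor_id();
    /* ...read the clocksource on the chosen CPUs and compare... */
    preempt_enable();
    migrate_enable();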
2025-02-21  clocksource: Use pr_info() for "Checking clocksource synchronization" message  [Waiman Long]
[ Upstream commit 1f566840a82982141f94086061927a90e79440e5 ] The "Checking clocksource synchronization" message is normally printed when clocksource_verify_percpu() is called for a given clocksource if both the CLOCK_SOURCE_UNSTABLE and CLOCK_SOURCE_VERIFY_PERCPU flags are set. It is an informational message and so pr_info() is the correct choice. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Paul E. McKenney <paulmck@kernel.org> Acked-by: John Stultz <jstultz@google.com> Link: https://lore.kernel.org/all/20250125015442.3740588-1-longman@redhat.com Stable-dep-of: 6bb05a33337b ("clocksource: Use migrate_disable() to avoid calling get_random_u32() in atomic context") Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  cgroup: fix race between fork and cgroup.kill  [Shakeel Butt]
commit b69bb476dee99d564d65d418e9a20acca6f32c3f upstream. Tejun reported the following race between fork() and cgroup.kill at [1]. Tejun: I was looking at cgroup.kill implementation and wondering whether there could be a race window. So, __cgroup_kill() does the following: k1. Set CGRP_KILL. k2. Iterate tasks and deliver SIGKILL. k3. Clear CGRP_KILL. The copy_process() does the following: c1. Copy a bunch of stuff. c2. Grab siglock. c3. Check fatal_signal_pending(). c4. Commit to forking. c5. Release siglock. c6. Call cgroup_post_fork() which puts the task on the css_set and tests CGRP_KILL. The intention seems to be that either a forking task gets SIGKILL and terminates on c3 or it sees CGRP_KILL on c6 and kills the child. However, I don't see what guarantees that k3 can't happen before c6. ie. After a forking task passes c5, k2 can take place and then before the forking task reaches c6, k3 can happen. Then, nobody would send SIGKILL to the child. What am I missing? This is indeed a race. One way to fix this race is by taking cgroup_threadgroup_rwsem in write mode in __cgroup_kill() as the fork() side takes cgroup_threadgroup_rwsem in read mode from cgroup_can_fork() to cgroup_post_fork(). However that would be heavy handed as this adds one more potential stall scenario for cgroup.kill which is usually called under extreme situation like memory pressure. To fix this race, let's maintain a sequence number per cgroup which gets incremented on __cgroup_kill() call. On the fork() side, the cgroup_can_fork() will cache the sequence number locally and recheck it against the cgroup's sequence number at cgroup_post_fork() site. If the sequence numbers mismatch, it means __cgroup_kill() can been called and we should send SIGKILL to the newly created task. Reported-by: Tejun Heo <tj@kernel.org> Closes: https://lore.kernel.org/all/Z5QHE2Qn-QZ6M-KW@slm.duckdns.org/ [1] Fixes: 661ee6280931 ("cgroup: introduce cgroup.kill") Cc: stable@vger.kernel.org # v5.14+ Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev> Reviewed-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
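A hedged sketch of the sequence-number handshake; field and helper names are approximate, not the literal upstream patch:

    /* __cgroup_kill(): bump the counter before delivering SIGKILLs. */
    spin_lock_irq(&css_set_lock);
    cgrp->kill_seq++;
    spin_unlock_irq(&css_set_lock);

    /* cgroup_can_fork(): remember the value seen before committing. */
    kargs->kill_seq = dst_cgrp->kill_seq;

    /* cgroup_post_fork(): a changed value means a kill raced with the fork. */
    if (cgrp->kill_seq != kargs->kill_seq)
            do_send_sig_info(SIGKILL, SEND_SIG_NOINFO, child, PIDTYPE_TGID);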
2025-02-21  cgroup: Remove steal time from usage_usec  [Muhammad Adeel]
[ Upstream commit db5fd3cf8bf41b84b577b8ad5234ea95f327c9be ] The CPU usage time is the time when user, system or both are using the CPU. Steal time is the time when CPU is waiting to be run by the Hypervisor. It should not be added to the CPU usage time, hence removing it from the usage_usec entry. Fixes: 936f2a70f2077 ("cgroup: add cpu.stat file to root cgroup") Acked-by: Axel Busch <axel.busch@ibm.com> Acked-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Muhammad Adeel <muhammad.adeel@ibm.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  printk: Fix signed integer overflow when defining LOG_BUF_LEN_MAX  [Kuan-Wei Chiu]
[ Upstream commit 3d6f83df8ff2d5de84b50377e4f0d45e25311c7a ] Shifting 1 << 31 on a 32-bit int causes signed integer overflow, which leads to undefined behavior. To prevent this, cast 1 to u32 before performing the shift, ensuring well-defined behavior. This change explicitly avoids any potential overflow by ensuring that the shift occurs on an unsigned 32-bit integer. Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> Acked-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20240928113608.1438087-1-visitorckw@gmail.com Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
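The fix amounts to moving the cast; a before/after sketch of the macro (the surrounding definition in kernel/printk/printk.c is assumed):

    /* Before: the shift happens on a signed int and overflows. */
    #define LOG_BUF_LEN_MAX (u32)(1 << 31)

    /* After: shift an unsigned 32-bit value, which is well defined. */
    #define LOG_BUF_LEN_MAX ((u32)1 << 31)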
2025-02-21  sched: Don't try to catch up excess steal time.  [Suleiman Souhlal]
[ Upstream commit 108ad0999085df2366dd9ef437573955cb3f5586 ] When steal time exceeds the measured delta when updating clock_task, we currently try to catch up the excess in future updates. However, this results in inaccurate run times for the future things using clock_task, in some situations, as they end up getting additional steal time that did not actually happen. This is because there is a window between reading the elapsed time in update_rq_clock() and sampling the steal time in update_rq_clock_task(). If the VCPU gets preempted between those two points, any additional steal time is accounted to the outgoing task even though the calculated delta did not actually contain any of that "stolen" time. When this race happens, we can end up with steal time that exceeds the calculated delta, and the previous code would try to catch up that excess steal time in future clock updates, which is given to the next, incoming task, even though it did not actually have any time stolen. This behavior is particularly bad when steal time can be very long, which we've seen when trying to extend steal time to contain the duration that the host was suspended [0]. When this happens, clock_task stays frozen, during which the running task stays running for the whole duration, since its run time doesn't increase. However the race can happen even under normal operation. Ideally we would read the elapsed cpu time and the steal time atomically, to prevent this race from happening in the first place, but doing so is non-trivial. Since the time between those two points isn't otherwise accounted anywhere, neither to the outgoing task nor the incoming task (because the "end of outgoing task" and "start of incoming task" timestamps are the same), I would argue that the right thing to do is to simply drop any excess steal time, in order to prevent these issues. [0] https://lore.kernel.org/kvm/20240820043543.837914-1-suleiman@google.com/ Signed-off-by: Suleiman Souhlal <suleiman@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20241118043745.1857272-1-suleiman@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
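A sketch of the clamping in update_rq_clock_task(), abridged from the behaviour described above rather than copied from the diff:

    #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
            steal = paravirt_steal_clock(cpu_of(rq)) - rq->prev_steal_time_rq;

            /* Drop any excess instead of carrying it into future updates,
             * where it would be charged to an unrelated incoming task. */
            if (unlikely(steal > delta))
                    steal = delta;

            rq->prev_steal_time_rq += steal;
            delta -= steal;
    #endif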
2025-02-21  PM: hibernate: Add error handling for syscore_suspend()  [Wentao Liang]
[ Upstream commit e20a70c572539a486dbd91b225fa6a194a5e2122 ] In hibernation_platform_enter(), the code did not check the return value of syscore_suspend(), potentially leading to a situation where syscore_resume() would be called even if syscore_suspend() failed. This could cause unpredictable behavior or system instability. Modify the code sequence in question to properly handle errors returned by syscore_suspend(). If an error occurs in the suspend path, the code now jumps to label 'Enable_irqs' skipping the syscore_resume() call and only enabling interrupts after setting the system state to SYSTEM_RUNNING. Fixes: 40dc166cb5dd ("PM / Core: Introduce struct syscore_ops for core subsystems PM") Signed-off-by: Wentao Liang <vulab@iscas.ac.cn> Link: https://patch.msgid.link/20250119143205.2103-1-vulab@iscas.ac.cn [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
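A sketch of the reordered error path in hibernation_platform_enter(); the label name comes from the changelog above, the surrounding code is abridged:

    error = syscore_suspend();
    if (error)
            goto Enable_irqs;       /* skip syscore_resume() on failure */

    /* ...enter the platform low-power state... */

    syscore_resume();

    Enable_irqs:
            system_state = SYSTEM_RUNNING;
            local_irq_enable();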
2025-02-21  padata: avoid UAF for reorder_work  [Chen Ridong]
[ Upstream commit dd7d37ccf6b11f3d95e797ebe4e9e886d0332600 ] Although the previous patch can avoid ps and ps UAF for _do_serial, it can not avoid potential UAF issue for reorder_work. This issue can happen just as below: crypto_request crypto_request crypto_del_alg padata_do_serial ... padata_reorder // processes all remaining // requests then breaks while (1) { if (!padata) break; ... } padata_do_serial // new request added list_add // sees the new request queue_work(reorder_work) padata_reorder queue_work_on(squeue->work) ... <kworker context> padata_serial_worker // completes new request, // no more outstanding // requests crypto_del_alg // free pd <kworker context> invoke_padata_reorder // UAF of pd To avoid UAF for 'reorder_work', get 'pd' ref before put 'reorder_work' into the 'serial_wq' and put 'pd' ref until the 'serial_wq' finish. Fixes: bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using per-instance padata queues") Signed-off-by: Chen Ridong <chenridong@huawei.com> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  padata: add pd get/put refcnt helper  [Chen Ridong]
[ Upstream commit ae154202cc6a189b035359f3c4e143d5c24d5352 ] Add helpers for pd to get/put the refcnt to make the code concise. Signed-off-by: Chen Ridong <chenridong@huawei.com> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Stable-dep-of: dd7d37ccf6b1 ("padata: avoid UAF for reorder_work") Signed-off-by: Sasha Levin <sashal@kernel.org>
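A sketch of what such helpers look like; the names match those referenced by the follow-up UAF fix above, but the bodies are illustrative:

    static void padata_get_pd(struct parallel_data *pd)
    {
            refcount_inc(&pd->refcnt);
    }

    static void padata_put_pd_cnt(struct parallel_data *pd, int cnt)
    {
            if (refcount_sub_and_test(cnt, &pd->refcnt))
                    padata_free_pd(pd);
    }

    static void padata_put_pd(struct parallel_data *pd)
    {
            padata_put_pd_cnt(pd, 1);
    }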
2025-02-21  padata: fix UAF in padata_reorder  [Chen Ridong]
[ Upstream commit e01780ea4661172734118d2a5f41bc9720765668 ] A bug was found when run ltp test: BUG: KASAN: slab-use-after-free in padata_find_next+0x29/0x1a0 Read of size 4 at addr ffff88bbfe003524 by task kworker/u113:2/3039206 CPU: 0 PID: 3039206 Comm: kworker/u113:2 Kdump: loaded Not tainted 6.6.0+ Workqueue: pdecrypt_parallel padata_parallel_worker Call Trace: <TASK> dump_stack_lvl+0x32/0x50 print_address_description.constprop.0+0x6b/0x3d0 print_report+0xdd/0x2c0 kasan_report+0xa5/0xd0 padata_find_next+0x29/0x1a0 padata_reorder+0x131/0x220 padata_parallel_worker+0x3d/0xc0 process_one_work+0x2ec/0x5a0 If 'mdelay(10)' is added before calling 'padata_find_next' in the 'padata_reorder' function, this issue could be reproduced easily with ltp test (pcrypt_aead01). This can be explained as bellow: pcrypt_aead_encrypt ... padata_do_parallel refcount_inc(&pd->refcnt); // add refcnt ... padata_do_serial padata_reorder // pd while (1) { padata_find_next(pd, true); // using pd queue_work_on ... padata_serial_worker crypto_del_alg padata_put_pd_cnt // sub refcnt padata_free_shell padata_put_pd(ps->pd); // pd is freed // loop again, but pd is freed // call padata_find_next, UAF } In the padata_reorder function, when it loops in 'while', if the alg is deleted, the refcnt may be decreased to 0 before entering 'padata_find_next', which leads to UAF. As mentioned in [1], do_serial is supposed to be called with BHs disabled and always happen under RCU protection, to address this issue, add synchronize_rcu() in 'padata_free_shell' wait for all _do_serial calls to finish. [1] https://lore.kernel.org/all/20221028160401.cccypv4euxikusiq@parnassus.localdomain/ [2] https://lore.kernel.org/linux-kernel/jfjz5d7zwbytztackem7ibzalm5lnxldi2eofeiczqmqs2m7o6@fq426cwnjtkm/ Fixes: b128a3040935 ("padata: allocate workqueue internally") Signed-off-by: Chen Ridong <chenridong@huawei.com> Signed-off-by: Qu Zicheng <quzicheng@huawei.com> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  bpf: Send signals asynchronously if !preemptible  [Puranjay Mohan]
[ Upstream commit 87c544108b612512b254c8f79aa5c0a8546e2cc4 ] BPF programs can execute in all kinds of contexts and when a program running in a non-preemptible context uses the bpf_send_signal() kfunc, it will cause issues because this kfunc can sleep. Change `irqs_disabled()` to `!preemptible()`. Reported-by: syzbot+97da3d7e0112d59971de@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/67486b09.050a0220.253251.0084.GAE@google.com/ Fixes: 1bc7896e9ef4 ("bpf: Fix deadlock with rq_lock in bpf_send_signal()") Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20250115103647.38487-1-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  padata: fix sysfs store callback check  [Thomas Weißschuh]
[ Upstream commit 9ff6e943bce67d125781fe4780a5d6f072dc44c0 ] padata_sysfs_store() was copied from padata_sysfs_show() but this check was not adapted. Today there is no attribute which can fail this check, but if there is one it may as well be correct. Fixes: 5e017dc3f8bc ("padata: Added sysfs primitives to padata subsystem") Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  cpufreq: schedutil: Fix superfluous updates caused by need_freq_update  [Sultan Alsawaf (unemployed)]
[ Upstream commit 8e461a1cb43d69d2fc8a97e61916dce571e6bb31 ] A redundant frequency update is only truly needed when there is a policy limits change with a driver that specifies CPUFREQ_NEED_UPDATE_LIMITS. In spite of that, drivers specifying CPUFREQ_NEED_UPDATE_LIMITS receive a frequency update _all the time_, not just for a policy limits change, because need_freq_update is never cleared. Furthermore, ignore_dl_rate_limit()'s usage of need_freq_update also leads to a redundant frequency update, regardless of whether or not the driver specifies CPUFREQ_NEED_UPDATE_LIMITS, when the next chosen frequency is the same as the current one. Fix the superfluous updates by only honoring CPUFREQ_NEED_UPDATE_LIMITS when there's a policy limits change, and clearing need_freq_update when a requisite redundant update occurs. This is neatly achieved by moving up the CPUFREQ_NEED_UPDATE_LIMITS test and instead setting need_freq_update to false in sugov_update_next_freq(). Fixes: 600f5badb78c ("cpufreq: schedutil: Don't skip freq update when limits change") Signed-off-by: Sultan Alsawaf (unemployed) <sultan@kerneltoast.com> Reviewed-by: Christian Loehle <christian.loehle@arm.com> Link: https://patch.msgid.link/20241212015734.41241-2-sultan@kerneltoast.com Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
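A sketch of the reshaped decision in sugov_update_next_freq(), close to the logic described above but to be read as illustrative rather than the exact diff:

    static bool sugov_update_next_freq(struct sugov_policy *sg_policy,
                                       u64 time, unsigned int next_freq)
    {
            if (sg_policy->need_freq_update) {
                    /* Limits changed: clear the flag so it cannot keep
                     * forcing updates forever. */
                    sg_policy->need_freq_update = false;
                    if (sg_policy->next_freq == next_freq &&
                        !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
                            return false;
            } else if (sg_policy->next_freq == next_freq) {
                    return false;
            }

            sg_policy->next_freq = next_freq;
            sg_policy->last_freq_update_time = time;
            return true;
    }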
2025-02-21  genirq: Make handle_enforce_irqctx() unconditionally available  [Thomas Gleixner]
[ Upstream commit 8d187a77f04c14fb459a5301d69f733a5a1396bc ] Commit 1b57d91b969c ("irqchip/gic-v2, v3: Prevent SW resends entirely") set the flag which enforces interrupt handling in interrupt context and prevents software based resends for ARM GIC v2/v3. But it missed that the helper function which checks the flag was hidden behind CONFIG_GENERIC_PENDING_IRQ, which is not set by ARM[64]. Make the helper unconditionally available so that the enforcement actually works. Fixes: 1b57d91b969c ("irqchip/gic-v2, v3: Prevent SW resends entirely") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20241210101811.497716609@linutronix.de Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  sched/fair: Fix value reported by hot tasks pulled in /proc/schedstat  [Peter Zijlstra]
[ Upstream commit a430d99e349026d53e2557b7b22bd2ebd61fe12a ] In /proc/schedstat, lb_hot_gained reports the number of hot tasks pulled during load balance. This value is incremented in can_migrate_task() if the task is migratable and hot. After incrementing the value, the load balancer can still decide not to migrate this task, leading to wrong accounting. Fix this by incrementing the stats when hot tasks are detached. This issue only exists in detach_tasks(), where we can decide not to migrate a hot task even if it is migratable. However, in detach_one_task(), we migrate it unconditionally. [Swapnil: Handled the case where nr_failed_migrations_hot was not accounted properly and wrote commit log] Fixes: d31980846f96 ("sched: Move up affinity check to mitigate useless redoing overhead") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reported-by: "Gautham R. Shenoy" <gautham.shenoy@amd.com> Not-yet-signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Swapnil Sapkal <swapnil.sapkal@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20241220063224.17767-2-swapnil.sapkal@amd.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-21  sched/psi: Use task->psi_flags to clear in CPU migration  [Chengming Zhou]
[ Upstream commit 52b33d87b9197c51e8ffdc61873739d90dd0a16f ] The commit d583d360a620 ("psi: Fix psi state corruption when schedule() races with cgroup move") fixed a race problem by making cgroup_move_task() use task->psi_flags instead of looking at the scheduler state. We can extend task->psi_flags usage to CPU migration, which should be a minor optimization for performance and code simplicity. Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Link: https://lore.kernel.org/r/20220926081931.45420-1-zhouchengming@bytedance.com Stable-dep-of: a430d99e3490 ("sched/fair: Fix value reported by hot tasks pulled in /proc/schedstat") Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-07  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
2025-02-07  Merge tag 'v6.1.128' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.128 stable release
2025-02-01  softirq: Allow raising SCHED_SOFTIRQ from SMP-call-function on RT kernel  [K Prateek Nayak]
commit 6675ce20046d149e1e1ffe7e9577947dee17aad5 upstream. do_softirq_post_smp_call_flush() on PREEMPT_RT kernels carries a WARN_ON_ONCE() for any SOFTIRQ being raised from an SMP-call-function. Since do_softirq_post_smp_call_flush() is called with preempt disabled, raising a SOFTIRQ during flush_smp_call_function_queue() can lead to longer preempt disabled sections. Since commit b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()") IPIs to an idle CPU in TIF_POLLING_NRFLAG mode can be optimized out by instead setting TIF_NEED_RESCHED bit in idle task's thread_info and relying on the flush_smp_call_function_queue() in the idle-exit path to run the SMP-call-function. To trigger an idle load balancing, the scheduler queues nohz_csd_function() responsible for triggering an idle load balancing on a target nohz idle CPU and sends an IPI. Only now, this IPI is optimized out and the SMP-call-function is executed from flush_smp_call_function_queue() in do_idle() which can raise a SCHED_SOFTIRQ to trigger the balancing. So far, this went undetected since, the need_resched() check in nohz_csd_function() would make it bail out of idle load balancing early as the idle thread does not clear TIF_POLLING_NRFLAG before calling flush_smp_call_function_queue(). The need_resched() check was added with the intent to catch a new task wakeup, however, it has recently discovered to be unnecessary and will be removed in the subsequent commit after which nohz_csd_function() can raise a SCHED_SOFTIRQ from flush_smp_call_function_queue() to trigger an idle load balance on an idle target in TIF_POLLING_NRFLAG mode. nohz_csd_function() bails out early if "idle_cpu()" check for the target CPU, and does not lock the target CPU's rq until the very end, once it has found tasks to run on the CPU and will not inhibit the wakeup of, or running of a newly woken up higher priority task. Account for this and prevent a WARN_ON_ONCE() when SCHED_SOFTIRQ is raised from flush_smp_call_function_queue(). Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20241119054432.6405-2-kprateek.nayak@amd.com Tested-by: Felix Moessbauer <felix.moessbauer@siemens.com> Signed-off-by: Florian Bezdeka <florian.bezdeka@siemens.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-27  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
2025-01-27  Merge tag 'v6.1.127' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.127 stable release
2025-01-23  hrtimers: Handle CPU state correctly on hotplug  [Koichiro Den]
commit 2f8dea1692eef2b7ba6a256246ed82c365fdc686 upstream. Consider a scenario where a CPU transitions from CPUHP_ONLINE to halfway through a CPU hotunplug down to CPUHP_HRTIMERS_PREPARE, and then back to CPUHP_ONLINE: Since hrtimers_prepare_cpu() does not run, cpu_base.hres_active remains set to 1 throughout. However, during a CPU unplug operation, the tick and the clockevents are shut down at CPUHP_AP_TICK_DYING. On return to the online state, for instance CFS incorrectly assumes that the hrtick is already active, and the chance of the clockevent device to transition to oneshot mode is also lost forever for the CPU, unless it goes back to a lower state than CPUHP_HRTIMERS_PREPARE once. This round-trip reveals another issue; cpu_base.online is not set to 1 after the transition, which appears as a WARN_ON_ONCE in enqueue_hrtimer(). Aside of that, the bulk of the per CPU state is not reset either, which means there are dangling pointers in the worst case. Address this by adding a corresponding startup() callback, which resets the stale per CPU state and sets the online flag. [ tglx: Make the new callback unconditionally available, remove the online modification in the prepare() callback and clear the remaining state in the starting callback instead of the prepare callback ] Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier") Signed-off-by: Koichiro Den <koichiro.den@canonical.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20241220134421.3809834-1-koichiro.den@canonical.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-23  kheaders: Ignore silly-rename files  [David Howells]
[ Upstream commit 973b710b8821c3401ad7a25360c89e94b26884ac ] Tell tar to ignore silly-rename files (".__afs*" and ".nfs*") when building the header archive. These occur when a file that is open is unlinked locally, but hasn't yet been closed. Such files are visible to the user via the getdents() syscall and so programs may want to do things with them. During the kernel build, such files may be made during the processing of header files and the cleanup may get deferred by fput() which may result in tar seeing these files when it reads the directory, but they may have disappeared by the time it tries to open them, causing tar to fail with an error. Further, we don't want to include them in the tarball if they still exist. With CONFIG_HEADERS_INSTALL=y, something like the following may be seen: find: './kernel/.tmp_cpio_dir/include/dt-bindings/reset/.__afs2080': No such file or directory tar: ./include/linux/greybus/.__afs3C95: File removed before we read it The find warning doesn't seem to cause a problem. Fix this by telling tar when called from in gen_kheaders.sh to exclude such files. This only affects afs and nfs; cifs uses the Windows Hidden attribute to prevent the file from being seen. Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20241213135013.2964079-2-dhowells@redhat.com cc: Masahiro Yamada <masahiroy@kernel.org> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org cc: linux-nfs@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-01-20  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
2025-01-20  Merge tag 'v6.1.125' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.125 stable release
2025-01-17  bpf: Fix overloading of MEM_UNINIT's meaning  [Daniel Borkmann]
commit 8ea607330a39184f51737c6ae706db7fdca7628e upstream. Lonial reported an issue in the BPF verifier where check_mem_size_reg() has the following code: if (!tnum_is_const(reg->var_off)) /* For unprivileged variable accesses, disable raw * mode so that the program is required to * initialize all the memory that the helper could * just partially fill up. */ meta = NULL; This means that writes are not checked when the register containing the size of the passed buffer has not a fixed size. Through this bug, a BPF program can write to a map which is marked as read-only, for example, .rodata global maps. The problem is that MEM_UNINIT's initial meaning that "the passed buffer to the BPF helper does not need to be initialized" which was added back in commit 435faee1aae9 ("bpf, verifier: add ARG_PTR_TO_RAW_STACK type") got overloaded over time with "the passed buffer is being written to". The problem however is that checks such as the above which were added later via 06c1c049721a ("bpf: allow helpers access to variable memory") set meta to NULL in order force the user to always initialize the passed buffer to the helper. Due to the current double meaning of MEM_UNINIT, this bypasses verifier write checks to the memory (not boundary checks though) and only assumes the latter memory is read instead. Fix this by reverting MEM_UNINIT back to its original meaning, and having MEM_WRITE as an annotation to BPF helpers in order to then trigger the BPF verifier checks for writing to memory. Some notes: check_arg_pair_ok() ensures that for ARG_CONST_SIZE{,_OR_ZERO} we can access fn->arg_type[arg - 1] since it must contain a preceding ARG_PTR_TO_MEM. For check_mem_reg() the meta argument can be removed altogether since we do check both BPF_READ and BPF_WRITE. Same for the equivalent check_kfunc_mem_size_reg(). Fixes: 7b3552d3f9f6 ("bpf: Reject writes for PTR_TO_MAP_KEY in check_helper_mem_access") Fixes: 97e6d7dab1ca ("bpf: Check PTR_TO_MEM | MEM_RDONLY in check_helper_mem_access") Fixes: 15baa55ff5b0 ("bpf/verifier: allow all functions to read user provided context") Reported-by: Lonial Con <kongln9170@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20241021152809.33343-2-daniel@iogearbox.net Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: BRUNO VERNAY <bruno.vernay@se.com> Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-17  bpf: Add MEM_WRITE attribute  [Daniel Borkmann]
commit 6fad274f06f038c29660aa53fbad14241c9fd976 upstream. Add a MEM_WRITE attribute for BPF helper functions which can be used in bpf_func_proto to annotate an argument type in order to let the verifier know that the helper writes into the memory passed as an argument. In the past MEM_UNINIT has been (ab)used for this function, but the latter merely tells the verifier that the passed memory can be uninitialized. There have been bugs with overloading the latter but aside from that there are also cases where the passed memory is read + written which currently cannot be expressed, see also 4b3786a6c539 ("bpf: Zero former ARG_PTR_TO_{LONG,INT} args in case of error"). Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20241021152809.33343-1-daniel@iogearbox.net Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: BRUNO VERNAY <bruno.vernay@se.com> Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com> Stable-dep-of: 8ea607330a39 ("bpf: Fix overloading of MEM_UNINIT's meaning") Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-12  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
2025-01-12  Merge tag 'v6.1.124' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.124 stable release
2025-01-09  kcov: mark in_softirq_really() as __always_inline  [Arnd Bergmann]
commit cb0ca08b326aa03f87fe94bb91872ce8d2ef1ed8 upstream. If gcc decides not to inline in_softirq_really(), objtool warns about a function call with UACCESS enabled: kernel/kcov.o: warning: objtool: __sanitizer_cov_trace_pc+0x1e: call to in_softirq_really() with UACCESS enabled kernel/kcov.o: warning: objtool: check_kcov_mode+0x11: call to in_softirq_really() with UACCESS enabled Mark this as __always_inline to avoid the problem. Link: https://lkml.kernel.org/r/20241217071814.2261620-1-arnd@kernel.org Fixes: 7d4df2dad312 ("kcov: properly check for softirq context") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Marco Elver <elver@google.com> Cc: Aleksandr Nogikh <nogikh@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
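The helper in question is tiny; a sketch of the annotated version, where the attribute is the whole point and the body follows the earlier kcov softirq check:

    /* Must never become an out-of-line call: it runs inside the
     * user_access_begin()/user_access_end() region that objtool verifies. */
    static __always_inline bool in_softirq_really(void)
    {
            return in_serving_softirq() && !in_hardirq() && !in_nmi();
    }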
2025-01-09  bpf: fix potential error return  [Anton Protopopov]
[ Upstream commit c4441ca86afe4814039ee1b32c39d833c1a16bbc ] The bpf_remove_insns() function returns WARN_ON_ONCE(error), where error is a result of bpf_adj_branches(), and thus should always be 0. However, if for any reason it is not 0, then it will be converted to boolean by WARN_ON_ONCE and returned to user space as 1, not an actual error value. Fix this by returning the original err after the WARN check. Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20241210114245.836164-1-aspsk@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
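A sketch of the corrected return path, matching the description above with the surrounding function abridged:

    int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
    {
            int err;

            /* ...shift the remaining instructions down over [off, off + cnt)... */

            err = bpf_adj_branches(prog, off, off + cnt, off, false);

            /* Keep the warning, but return the real error code rather than
             * the 0/1 that WARN_ON_ONCE() evaluates to. */
            WARN_ON_ONCE(err);
            return err;
    }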
2025-01-09  tracing: Have process_string() also allow arrays  [Steven Rostedt]
commit afc6717628f959941d7b33728570568b4af1c4b8 upstream.

In order to catch a common bug where a TRACE_EVENT() TP_fast_assign() assigns an address of an allocated string to the ring buffer and then references it in TP_printk(), which can be executed hours later when the string has been freed, the function test_event_printk() runs on all events as they are registered to make sure there's no unwanted dereferencing. It calls process_string() to handle cases in the TP_printk() format that have "%s". It returns whether or not the string is safe. But it can have some false positives. For instance, xe_bo_move() has:

  TP_printk("move_lacks_source:%s, migrate object %p [size %zu] from %s to %s device_id:%s",
            __entry->move_lacks_source ? "yes" : "no", __entry->bo, __entry->size,
            xe_mem_type_to_name[__entry->old_placement],
            xe_mem_type_to_name[__entry->new_placement], __get_str(device_id))

Where the "%s" references into xe_mem_type_to_name[]. This is an array of pointers that should be safe for the event to access. Instead of flagging this as a bad reference, if a reference points to an array, where the record field is the index, consider it safe.

Link: https://lore.kernel.org/all/9dee19b6185d325d0e6fa5f7cbba81d007d99166.camel@sapience.com/
Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20241231000646.324fb5f7@gandalf.local.home
Fixes: 65a25d9f7ac02 ("tracing: Add "%s" check in test_event_printk()")
Reported-by: Genes Lists <lists@sapience.com>
Tested-by: Gene C <arch@sapience.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-02  Merge branch 'v6.1/standard/base' into v6.1/standard/preempt-rt/base  [Bruce Ashfield]
2025-01-02  Merge tag 'v6.1.123' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.123 stable release
2025-01-02  Merge tag 'v6.1.122' into v6.1/standard/base  [Bruce Ashfield]
This is the 6.1.122 stable release