path: root/kernel
Age    Commit message    Author
2024-03-28Merge tag 'v5.4.273' into v5.4/standard/baseBruce Ashfield
Linux 5.4.273
2024-03-28Merge tag 'v5.4.272' into v5.4/standard/baseBruce Ashfield
Linux 5.4.272
2024-03-28Merge tag 'v5.4.270' into v5.4/standard/baseBruce Ashfield
This is the 5.4.270 stable release
2024-03-28Merge tag 'v5.4.269' into v5.4/standard/baseBruce Ashfield
This is the 5.4.269 stable release
2024-03-26bpf: report RCU QS in cpumap kthreadYan Zhai
[ Upstream commit 00bf63122459e87193ee7f1bc6161c83a525569f ] Under heavy load, cpumap kernel threads can be busy polling packets from redirect queues and block RCU tasks from reaching quiescent states. It is insufficient to just call cond_resched() in such a context. Periodically raising a consolidated RCU quiescent state (QS) before cond_resched() fixes the problem. Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP") Reviewed-by: Jesper Dangaard Brouer <hawk@kernel.org> Signed-off-by: Yan Zhai <yan@cloudflare.com> Acked-by: Paul E. McKenney <paulmck@kernel.org> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Link: https://lore.kernel.org/r/c17b9f1517e19d813da3ede5ed33ee18496bb5d8.1710877680.git.yan@cloudflare.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
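The fix boils down to having the polling kthread report an RCU quiescent state from time to time instead of relying on cond_resched() alone. A minimal sketch of that pattern; the loop shape, the HZ/10 interval and the bh-disable wrapping are illustrative assumptions, not the exact cpumap code:

	unsigned long last_qs = jiffies;

	while (!kthread_should_stop()) {
		/* ... poll and process packets from the redirect queue ... */

		/* Periodically report a consolidated RCU quiescent state so
		 * RCU Tasks can make progress even under sustained load. */
		if (time_after(jiffies, last_qs + HZ / 10)) {
			local_bh_disable();
			rcu_softirq_qs();
			local_bh_enable();
			last_qs = jiffies;
		}
		cond_resched();
	}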
2024-03-26bpf: Fix stackmap overflow check on 32-bit archesToke Høiland-Jørgensen
[ Upstream commit 7a4b21250bf79eef26543d35bd390448646c536b ] The stackmap code relies on roundup_pow_of_two() to compute the number of hash buckets, and contains an overflow check by checking if the resulting value is 0. However, on 32-bit arches, the roundup code itself can overflow by doing a 32-bit left-shift of an unsigned long value, which is undefined behaviour, so it is not guaranteed to truncate neatly. This was triggered by syzbot on the DEVMAP_HASH type, which contains the same check, copied from the hashtab code. The commit in the fixes tag actually attempted to fix this, but the fix did not account for the UB, so the fix only works on CPUs where an overflow does result in a neat truncation to zero, which is not guaranteed. Checking the value before rounding does not have this problem. Fixes: 6183f4d3a0a2 ("bpf: Check for integer overflow when using roundup_pow_of_two()") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Reviewed-by: Bui Quang Minh <minhquangbui99@gmail.com> Message-ID: <20240307120340.99577-4-toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-26bpf: Fix hashtab overflow check on 32-bit archesToke Høiland-Jørgensen
[ Upstream commit 6787d916c2cf9850c97a0a3f73e08c43e7d973b1 ] The hashtab code relies on roundup_pow_of_two() to compute the number of hash buckets, and contains an overflow check by checking if the resulting value is 0. However, on 32-bit arches, the roundup code itself can overflow by doing a 32-bit left-shift of an unsigned long value, which is undefined behaviour, so it is not guaranteed to truncate neatly. This was triggered by syzbot on the DEVMAP_HASH type, which contains the same check, copied from the hashtab code. So apply the same fix to hashtab, by moving the overflow check to before the roundup. Fixes: daaf427c6ab3 ("bpf: fix arraymap NULL deref and missing overflow and zero size checks") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Message-ID: <20240307120340.99577-3-toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
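Both 32-bit overflow fixes above share the same shape: validate the requested size before calling roundup_pow_of_two(), because the 32-bit left shift inside that helper is undefined behaviour once the input exceeds 2^31. A hedged sketch of the ordering; the variable name and error path are illustrative, not the exact hashtab/stackmap code:

	/* Reject values that would make roundup_pow_of_two() shift by 32 or
	 * more bits on a 32-bit arch (undefined behaviour), before rounding. */
	if (n_buckets == 0 || n_buckets > (1UL << 31))
		return -E2BIG;
	n_buckets = roundup_pow_of_two(n_buckets);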
2024-03-26bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctlyYonghong Song
[ Upstream commit 178c54666f9c4d2f49f2ea661d0c11b52f0ed190 ] Currently, tracing is not supposed to allow bpf_spin_{lock,unlock}() helper calls. This is to prevent deadlock for the following cases:
- there is a prog (prog-A) calling bpf_spin_{lock,unlock}().
- there is a tracing program (prog-B), e.g., fentry, attached to bpf_spin_lock() and/or bpf_spin_unlock().
- prog-B calls bpf_spin_{lock,unlock}().
For such a case, when prog-A calls bpf_spin_{lock,unlock}(), a deadlock will happen. The related source code in kernel/bpf/helpers.c is:
	notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
	notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
notrace is supposed to prevent a fentry prog from attaching to bpf_spin_{lock,unlock}(). But actually this is not the case, and a fentry prog can successfully attach to bpf_spin_lock(). Siddharth Chintamaneni reported the issue in [1]. The following is the macro definition for the above BPF_CALL_1:
	#define BPF_CALL_x(x, name, ...)                                              \
		static __always_inline                                                \
		u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
		typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
		u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));       \
		u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))        \
		{                                                                     \
			return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
		}                                                                     \
		static __always_inline                                                \
		u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))

	#define BPF_CALL_1(name, ...)	BPF_CALL_x(1, name, __VA_ARGS__)
The notrace attribute is actually applied to the static always_inline function ____bpf_spin_{lock,unlock}(). The actual callback function bpf_spin_{lock,unlock}() is not marked with notrace, hence allowing a fentry prog to attach to these two helpers, and this may cause the above mentioned deadlock. Siddharth Chintamaneni actually has a reproducer in [2]. To fix the issue, a new macro NOTRACE_BPF_CALL_1 is introduced which adds the notrace attribute to the original function instead of the hidden always_inline function, and this fixes the problem. [1] https://lore.kernel.org/bpf/CAE5sdEigPnoGrzN8WU7Tx-h-iFuMZgW06qp0KHWtpvoXxf1OAQ@mail.gmail.com/ [2] https://lore.kernel.org/bpf/CAE5sdEg6yUc_Jz50AnUXEEUh6O73yQ1Z6NV2srJnef0ZrQkZew@mail.gmail.com/ Fixes: d83525ca62cf ("bpf: introduce bpf_spin_lock") Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20240207070102.335167-1-yonghong.song@linux.dev Signed-off-by: Sasha Levin <sashal@kernel.org>
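A simplified illustration of the attribute-placement issue (not the actual BPF_CALL_x macros): notrace must land on the outer, attachable symbol, because the inner helper is inlined away, so marking only the inner one has no effect.

	/* Only the outer function is a real symbol that an fentry program can
	 * attach to, so that is where notrace has to be applied (sketch). */
	static __always_inline u64 ____bpf_spin_lock(struct bpf_spin_lock *lock);

	notrace u64 bpf_spin_lock(struct bpf_spin_lock *lock)
	{
		return ____bpf_spin_lock(lock);
	}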
2024-03-26bpf: Factor out bpf_spin_lock into helpers.Alexei Starovoitov
[ Upstream commit c1b3fed319d32a721d4b9c17afaeb430444ff773 ] Move ____bpf_spin_lock/unlock into helpers to make it more clear that quadruple underscore bpf_spin_lock/unlock are irqsave/restore variants. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20210715005417.78572-3-alexei.starovoitov@gmail.com Stable-dep-of: 178c54666f9c ("bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctly") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-26timekeeping: Fix cross-timestamp interpolation for non-x86Peter Hilber
[ Upstream commit 14274d0bd31b4debf28284604589f596ad2e99f2 ] So far, get_device_system_crosststamp() unconditionally passes system_counterval.cycles to timekeeping_cycles_to_ns(). But when interpolating system time (do_interp == true), system_counterval.cycles is before tkr_mono.cycle_last, contrary to the timekeeping_cycles_to_ns() expectations. On x86, CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE will mitigate on interpolating, setting delta to 0. With delta == 0, xtstamp->sys_monoraw and xtstamp->sys_realtime are then set to the last update time, as implicitly expected by adjust_historical_crosststamp(). On other architectures, the resulting nonsense xtstamp->sys_monoraw and xtstamp->sys_realtime corrupt the xtstamp (ts) adjustment in adjust_historical_crosststamp(). Fix this by deriving xtstamp->sys_monoraw and xtstamp->sys_realtime from the last update time when interpolating, by using the local variable "cycles". The local variable already has the right value when interpolating, unlike system_counterval.cycles. Fixes: 2c756feb18d9 ("time: Add history to cross timestamp interface supporting slower devices") Signed-off-by: Peter Hilber <peter.hilber@opensynergy.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: John Stultz <jstultz@google.com> Link: https://lore.kernel.org/r/20231218073849.35294-4-peter.hilber@opensynergy.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-26timekeeping: Fix cross-timestamp interpolation corner case decisionPeter Hilber
[ Upstream commit 87a41130881995f82f7adbafbfeddaebfb35f0ef ] The cycle_between() helper checks if parameter test is in the open interval (before, after). Colloquially speaking, this also applies to the counter wrap-around special case before > after. get_device_system_crosststamp() currently uses cycle_between() at the first call site to decide whether to interpolate for older counter readings. get_device_system_crosststamp() has the following problem with cycle_between() testing against an open interval: Assume that, by chance, cycles == tk->tkr_mono.cycle_last (in the following, "cycle_last" for brevity). Then, cycle_between() at the first call site, with effective argument values cycle_between(cycle_last, cycles, now), returns false, enabling interpolation. During interpolation, get_device_system_crosststamp() will then call cycle_between() at the second call site (if a history_begin was supplied). The effective argument values are cycle_between(history_begin->cycles, cycles, cycles), since system_counterval.cycles == interval_start == cycles, per the assumption. Due to the test against the open interval, cycle_between() returns false again. This causes get_device_system_crosststamp() to return -EINVAL. This failure should be avoided, since get_device_system_crosststamp() works both when cycles follows cycle_last (no interpolation), and when cycles precedes cycle_last (interpolation). For the case cycles == cycle_last, interpolation is actually unneeded. Fix this by changing cycle_between() into timestamp_in_interval(), which now checks against the closed interval, rather than the open interval. This changes the get_device_system_crosststamp() behavior for three corner cases: 1. Bypass interpolation in the case cycles == tk->tkr_mono.cycle_last, fixing the problem described above. 2. At the first timestamp_in_interval() call site, cycles == now no longer causes failure. 3. At the second timestamp_in_interval() call site, history_begin->cycles == system_counterval.cycles no longer causes failure. adjust_historical_crosststamp() also works for this corner case, where partial_history_cycles == total_history_cycles. These behavioral changes should not cause any problems. Fixes: 2c756feb18d9 ("time: Add history to cross timestamp interface supporting slower devices") Signed-off-by: Peter Hilber <peter.hilber@opensynergy.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20231218073849.35294-3-peter.hilber@opensynergy.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-26timekeeping: Fix cross-timestamp interpolation on counter wrapPeter Hilber
[ Upstream commit 84dccadd3e2a3f1a373826ad71e5ced5e76b0c00 ] cycle_between() decides whether get_device_system_crosststamp() will interpolate for older counter readings. cycle_between() yields wrong results for a counter wrap-around where after < before < test, and for the case after < test < before. Fix the comparison logic. Fixes: 2c756feb18d9 ("time: Add history to cross timestamp interface supporting slower devices") Signed-off-by: Peter Hilber <peter.hilber@opensynergy.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: John Stultz <jstultz@google.com> Link: https://lore.kernel.org/r/20231218073849.35294-2-peter.hilber@opensynergy.com Signed-off-by: Sasha Levin <sashal@kernel.org>
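The timekeeping fixes above hinge on interval semantics: whether the endpoints count as "inside" and how counter wrap-around is treated. An illustrative sketch (not the exact kernel helpers) of the open-interval check versus its closed-interval replacement:

	/* Open interval (before, after), with wrap-around handling. */
	static bool cycle_between(u64 before, u64 test, u64 after)
	{
		if (test > before && test < after)
			return true;
		if (before > after && (test > before || test < after))
			return true;
		return false;
	}

	/* Closed interval [start, end]: the endpoints now count as a match. */
	static bool timestamp_in_interval(u64 start, u64 end, u64 test)
	{
		if (test >= start && test <= end)
			return true;
		if (start > end && (test >= start || test <= end))
			return true;
		return false;
	}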
2024-03-15getrusage: use sig->stats_lock rather than lock_task_sighand()Oleg Nesterov
[ Upstream commit f7ec1cd5cc7ef3ad964b677ba82b8b77f1c93009 ] lock_task_sighand() can trigger a hard lockup. If NR_CPUS threads call getrusage() at the same time and the process has NR_THREADS, spin_lock_irq will spin with irqs disabled O(NR_CPUS * NR_THREADS) time. Change getrusage() to use sig->stats_lock, it was specifically designed for this type of use. This way it runs lockless in the likely case. TODO: - Change do_task_stat() to use sig->stats_lock too, then we can remove spin_lock_irq(siglock) in wait_task_zombie(). - Turn sig->stats_lock into seqcount_rwlock_t, this way the readers in the slow mode won't exclude each other. See https://lore.kernel.org/all/20230913154907.GA26210@redhat.com/ - stats_lock has to disable irqs because ->siglock can be taken in irq context, it would be very nice to change __exit_signal() to avoid the siglock->stats_lock dependency. Link: https://lkml.kernel.org/r/20240122155053.GA26214@redhat.com Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reported-by: Dylan Hatch <dylanbhatch@google.com> Tested-by: Dylan Hatch <dylanbhatch@google.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-15getrusage: use __for_each_thread()Oleg Nesterov
[ Upstream commit 13b7bc60b5353371460a203df6c38ccd38ad7a3a ] do/while_each_thread should be avoided when possible. Plus this change allows to avoid lock_task_sighand(), we can use rcu and/or sig->stats_lock instead. Link: https://lkml.kernel.org/r/20230909172629.GA20454@redhat.com Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: f7ec1cd5cc7e ("getrusage: use sig->stats_lock rather than lock_task_sighand()") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-15getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()Oleg Nesterov
[ Upstream commit daa694e4137571b4ebec330f9a9b4d54aa8b8089 ] Patch series "getrusage: use sig->stats_lock", v2. This patch (of 2): thread_group_cputime() does its own locking, we can safely shift thread_group_cputime_adjusted() which does another for_each_thread loop outside of ->siglock protected section. This is also preparation for the next patch which changes getrusage() to use stats_lock instead of siglock, thread_group_cputime() takes the same lock. With the current implementation recursive read_seqbegin_or_lock() is fine, thread_group_cputime() can't enter the slow mode if the caller holds stats_lock, yet this looks more safe and better performance-wise. Link: https://lkml.kernel.org/r/20240122155023.GA26169@redhat.com Link: https://lkml.kernel.org/r/20240122155050.GA26205@redhat.com Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reported-by: Dylan Hatch <dylanbhatch@google.com> Tested-by: Dylan Hatch <dylanbhatch@google.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-15getrusage: add the "signal_struct *sig" local variableOleg Nesterov
[ Upstream commit c7ac8231ace9b07306d0299969e42073b189c70a ] No functional changes, cleanup/preparation. Link: https://lkml.kernel.org/r/20230909172554.GA20441@redhat.com Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: daa694e41375 ("getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-15y2038: rusage: use __kernel_old_timevalArnd Bergmann
[ Upstream commit bdd565f817a74b9e30edec108f7cb1dbc762b8a6 ] There are two 'struct timeval' fields in 'struct rusage'. Unfortunately the definition of timeval is now ambiguous when used in user space with a libc that has a 64-bit time_t, and this also changes the 'rusage' definition in user space in a way that is incompatible with the system call interface. While there is no good solution to avoid all ambiguity here, change the definition in the kernel headers to be compatible with the kernel ABI, using __kernel_old_timeval as an unambiguous base type. In previous discussions, there was also a plan to add a replacement for rusage based on 64-bit timestamps and nanosecond resolution, i.e. 'struct __kernel_timespec'. I have patches for that as well, if anyone thinks we should do that. Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Stable-dep-of: daa694e41375 ("getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-01sched/rt: Disallow writing invalid values to sched_rt_period_usCyril Hrubis
commit 079be8fc630943d9fc70a97807feb73d169ee3fc upstream. The validation of the value written to sched_rt_period_us was broken because:
- sysctl_sched_rt_period is declared as unsigned int
- it is parsed by proc_dointvec()
- the range is asserted only after the value has been parsed by proc_dointvec()
Because of this, negative values written to the file were stored into an unsigned integer and later interpreted as large positive integers, which passed the check:
	if (sysctl_sched_rt_period <= 0)
		return -EINVAL;
This commit fixes the parsing by setting an explicit range for both period_us and runtime_us in the sched_rt_sysctls table and processing the values with proc_dointvec_minmax() instead. Alternatively, if we wanted to use the full range of unsigned int for the period value, we would have to split the proc_handler and use proc_douintvec() for it; however, even Documentation/scheduler/sched-rt-group.rst describes the range as 1 to INT_MAX. As far as I can tell, the only problem this causes is that the sysctl file allows writing negative values which, when read back, may confuse userspace. There is also an LTP test being submitted for these sysctl files at: http://patchwork.ozlabs.org/project/ltp/patch/20230901144433.2526-1-chrubis@suse.cz/ Signed-off-by: Cyril Hrubis <chrubis@suse.cz> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20231002115553.3007-2-chrubis@suse.cz [ pvorel: rebased for 5.4 ] Reviewed-by: Petr Vorel <pvorel@suse.cz> Signed-off-by: Petr Vorel <pvorel@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
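A rough sketch of the bounded sysctl table entry the commit describes, assuming illustrative limit variables; the actual table layout and limits in the scheduler code may differ:

	static int sched_rt_period_min = 1;		/* illustrative bounds */
	static int sched_rt_period_max = INT_MAX;

	static struct ctl_table sched_rt_sysctls[] = {
		{
			.procname	= "sched_rt_period_us",
			.data		= &sysctl_sched_rt_period,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &sched_rt_period_min,
			.extra2		= &sched_rt_period_max,
		},
		{ }
	};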
2024-03-01sched/rt: Fix sysctl_sched_rr_timeslice intial valueCyril Hrubis
commit c7fcb99877f9f542c918509b2801065adcaf46fa upstream. There is a 10% rounding error in the initial value of the sysctl_sched_rr_timeslice with CONFIG_HZ_300=y. This was found with LTP test sched_rr_get_interval01:
	sched_rr_get_interval01.c:57: TPASS: sched_rr_get_interval() passed
	sched_rr_get_interval01.c:64: TPASS: Time quantum 0s 99999990ns
	sched_rr_get_interval01.c:72: TFAIL: /proc/sys/kernel/sched_rr_timeslice_ms != 100 got 90
	sched_rr_get_interval01.c:57: TPASS: sched_rr_get_interval() passed
	sched_rr_get_interval01.c:64: TPASS: Time quantum 0s 99999990ns
	sched_rr_get_interval01.c:72: TFAIL: /proc/sys/kernel/sched_rr_timeslice_ms != 100 got 90
What this test does is to compare the return value from sched_rr_get_interval() and the sched_rr_timeslice_ms sysctl file, and it fails if they do not match. The problem it found is the initial sysctl file value, which was computed as:
	static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
which works fine as long as MSEC_PER_SEC is a multiple of HZ; however, it introduces a 10% rounding error for CONFIG_HZ_300:
	(MSEC_PER_SEC / HZ) * (100 * HZ / 1000)
	(1000 / 300) * (100 * 300 / 1000)
	3 * 30 = 90
This can be easily fixed by reversing the order of the multiplication and division. After this fix we get:
	(MSEC_PER_SEC * (100 * HZ / 1000)) / HZ
	(1000 * (100 * 300 / 1000)) / 300
	(1000 * 30) / 300 = 100
Fixes: 975e155ed873 ("sched/rt: Show the 'sched_rr_timeslice' SCHED_RR timeslice tuning knob in milliseconds") Signed-off-by: Cyril Hrubis <chrubis@suse.cz> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Petr Vorel <pvorel@suse.cz> Acked-by: Mel Gorman <mgorman@suse.de> Tested-by: Petr Vorel <pvorel@suse.cz> Link: https://lore.kernel.org/r/20230802151906.25258-2-chrubis@suse.cz [ pvorel: rebased for 5.4 ] Signed-off-by: Petr Vorel <pvorel@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
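The fix amounts to reordering the integer arithmetic so the division by HZ happens last, presumably something along these lines (a sketch, not a verified copy of the patch):

	/* Before: (MSEC_PER_SEC / HZ) truncates to 3 when HZ=300, so the
	 * result is 3 * 30 = 90 instead of 100. Multiply first, divide last: */
	static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC * RR_TIMESLICE) / HZ;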
2024-03-01sched/rt: sysctl_sched_rr_timeslice show default timeslice after resetCyril Hrubis
commit c1fc6484e1fb7cc2481d169bfef129a1b0676abe upstream. The sched_rr_timeslice can be reset to default by writing value that is <= 0. However after reading from this file we always got the last value written, which is not useful at all. $ echo -1 > /proc/sys/kernel/sched_rr_timeslice_ms $ cat /proc/sys/kernel/sched_rr_timeslice_ms -1 Fix this by setting the variable that holds the sysctl file value to the jiffies_to_msecs(RR_TIMESLICE) in case that <= 0 value was written. Signed-off-by: Cyril Hrubis <chrubis@suse.cz> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Petr Vorel <pvorel@suse.cz> Acked-by: Mel Gorman <mgorman@suse.de> Tested-by: Petr Vorel <pvorel@suse.cz> Cc: Mahmoud Adam <mngyadam@amazon.com> Link: https://lore.kernel.org/r/20230802151906.25258-3-chrubis@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-02-23sched/membarrier: reduce the ability to hammer on sys_membarrierLinus Torvalds
commit 944d5fe50f3f03daacfea16300e656a1691c4a23 upstream. On some systems, sys_membarrier can be very expensive, causing overall slowdowns for everything. So put a lock on the path in order to serialize the accesses to prevent the ability for this to be called at too high of a frequency and saturate the machine. Reviewed-and-tested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Acked-by: Borislav Petkov <bp@alien8.de> Fixes: 22e4ebb97582 ("membarrier: Provide expedited private command") Fixes: c5f58bd58f43 ("membarrier: Provide GLOBAL_EXPEDITED command") Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [ converted to explicit mutex_*() calls - cleanup.h is not in this stable branch - gregkh ] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
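A hedged sketch of the serialization the commit describes; per the bracketed backport note, this stable branch uses explicit mutex_lock()/mutex_unlock() calls. The mutex name and the simplified IPI broadcast are illustrative assumptions, not the exact membarrier code:

	static DEFINE_MUTEX(membarrier_ipi_mutex);	/* illustrative name */

	/* Serialize the expensive IPI-broadcast path so user space cannot
	 * saturate the machine by hammering sys_membarrier() concurrently. */
	mutex_lock(&membarrier_ipi_mutex);
	smp_call_function_many(cpu_mask, ipi_mb, NULL, 1);	/* simplified */
	mutex_unlock(&membarrier_ipi_mutex);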
2024-02-23tracing: Inform kmemleak of saved_cmdlines allocationSteven Rostedt (Google)
commit 2394ac4145ea91b92271e675a09af2a9ea6840b7 upstream. The allocation of the struct saved_cmdlines_buffer structure changed from: s = kmalloc(sizeof(*s), GFP_KERNEL); s->saved_cmdlines = kmalloc_array(TASK_COMM_LEN, val, GFP_KERNEL); to: orig_size = sizeof(*s) + val * TASK_COMM_LEN; order = get_order(orig_size); size = 1 << (order + PAGE_SHIFT); page = alloc_pages(GFP_KERNEL, order); if (!page) return NULL; s = page_address(page); memset(s, 0, sizeof(*s)); s->saved_cmdlines = kmalloc_array(TASK_COMM_LEN, val, GFP_KERNEL); Where that s->saved_cmdlines allocation looks to be a dangling allocation to kmemleak. That's because kmemleak only keeps track of kmalloc() allocations. For allocations that use page_alloc() directly, the kmemleak needs to be explicitly informed about it. Add kmemleak_alloc() and kmemleak_free() around the page allocation so that it doesn't give the following false positive: unreferenced object 0xffff8881010c8000 (size 32760): comm "swapper", pid 0, jiffies 4294667296 hex dump (first 32 bytes): ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................ ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................ backtrace (crc ae6ec1b9): [<ffffffff86722405>] kmemleak_alloc+0x45/0x80 [<ffffffff8414028d>] __kmalloc_large_node+0x10d/0x190 [<ffffffff84146ab1>] __kmalloc+0x3b1/0x4c0 [<ffffffff83ed7103>] allocate_cmdlines_buffer+0x113/0x230 [<ffffffff88649c34>] tracer_alloc_buffers.isra.0+0x124/0x460 [<ffffffff8864a174>] early_trace_init+0x14/0xa0 [<ffffffff885dd5ae>] start_kernel+0x12e/0x3c0 [<ffffffff885f5758>] x86_64_start_reservations+0x18/0x30 [<ffffffff885f582b>] x86_64_start_kernel+0x7b/0x80 [<ffffffff83a001c3>] secondary_startup_64_no_verify+0x15e/0x16b Link: https://lore.kernel.org/linux-trace-kernel/87r0hfnr9r.fsf@kernel.org/ Link: https://lore.kernel.org/linux-trace-kernel/20240214112046.09a322d6@gandalf.local.home Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Fixes: 44dc5c41b5b1 ("tracing: Fix wasted memory in saved_cmdlines logic") Reported-by: Kalle Valo <kvalo@kernel.org> Tested-by: Kalle Valo <kvalo@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
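Since kmemleak only tracks kmalloc()-style allocations, the fix registers the page-allocated structure with kmemleak so its contents are scanned and the pointer to the kmalloc'd saved_cmdlines array is seen, removing the false positive. A simplified sketch of the pairing (sizes and error handling abbreviated, not the verbatim patch):

	page = alloc_pages(GFP_KERNEL, order);
	if (!page)
		return NULL;
	s = page_address(page);
	/* Tell kmemleak about memory it cannot see via kmalloc tracking. */
	kmemleak_alloc(s, size, 1, GFP_KERNEL);

	/* ... and unregister it again when the buffer is torn down ... */
	kmemleak_free(s);
	__free_pages(page, order);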
2024-02-23ring-buffer: Clean ring_buffer_poll_wait() error returnVincent Donnefort
commit 66bbea9ed6446b8471d365a22734dc00556c4785 upstream. The return type of ring_buffer_poll_wait() is __poll_t. This is, behind the scenes, an unsigned type in which event bits can be set. In the case of a non-allocated CPU, we instead return -EINVAL (0xffffffea). Luckily, this ends up setting a few error bits (EPOLLERR | EPOLLHUP | EPOLLNVAL), so user space is at least aware something went wrong. Nonetheless, this is incorrect code. Replace that -EINVAL with a proper EPOLLERR to clean up the output. As this doesn't change the behaviour, there's no need to treat this change as a bug fix. Link: https://lore.kernel.org/linux-trace-kernel/20240131140955.3322792-1-vdonnefort@google.com Cc: stable@vger.kernel.org Fixes: 6721cb6002262 ("ring-buffer: Do not poll non allocated cpu buffers") Signed-off-by: Vincent Donnefort <vdonnefort@google.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-02-23tracing: Fix wasted memory in saved_cmdlines logicSteven Rostedt (Google)
commit 44dc5c41b5b1267d4dd037d26afc0c4d3a568acb upstream. While looking at improving the saved_cmdlines cache I found a huge amount of wasted memory that should be used for the cmdlines. The tracing data saves pids during the trace. At sched switch, if a trace occurred, it will save the comm of the task that did the trace. This is saved in a "cache" that maps pids to comms and exposed to user space via the /sys/kernel/tracing/saved_cmdlines file. Currently it only caches by default 128 comms. The structure that uses this creates an array to store the pids using PID_MAX_DEFAULT (which is usually set to 32768). This causes the structure to be of the size of 131104 bytes on 64 bit machines. In hex: 131104 = 0x20020, and since the kernel allocates generic memory in powers of two, the kernel would allocate 0x40000 or 262144 bytes to store this structure. That leaves 131040 bytes of wasted space. Worse, the structure points to an allocated array to store the comm names, which is 16 bytes times the amount of names to save (currently 128), which is 2048 bytes. Instead of allocating a separate array, make the structure end with a variable length string and use the extra space for that. This is similar to a recommendation that Linus had made about eventfs_inode names: https://lore.kernel.org/all/20240130190355.11486-5-torvalds@linux-foundation.org/ Instead of allocating a separate string array to hold the saved comms, have the structure end with: char saved_cmdlines[]; and round up to the next power of two over sizeof(struct saved_cmdline_buffers) + num_cmdlines * TASK_COMM_LEN It will use this extra space for the saved_cmdline portion. Now, instead of saving only 128 comms by default, by using this wasted space at the end of the structure it can save over 8000 comms and even saves space by removing the need for allocating the other array. Link: https://lore.kernel.org/linux-trace-kernel/20240209063622.1f7b6d5f@rorschach.local.home Cc: stable@vger.kernel.org Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Vincent Donnefort <vdonnefort@google.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Mete Durlu <meted@linux.ibm.com> Fixes: 939c7a4f04fcd ("tracing: Introduce saved_cmdlines_size file") Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-02-23tracing/trigger: Fix to return error if failed to alloc snapshotMasami Hiramatsu (Google)
commit 0958b33ef5a04ed91f61cef4760ac412080c4e08 upstream. Fix register_snapshot_trigger() to return an error code if it fails to allocate a snapshot, instead of 0 (success). Otherwise, it would register the snapshot trigger without reporting an error. Link: https://lore.kernel.org/linux-trace-kernel/170622977792.270660.2789298642759362200.stgit@devnote2 Fixes: 0bbe7f719985 ("tracing: Fix the race between registering 'snapshot' event trigger and triggering 'snapshot' operation") Cc: stable@vger.kernel.org Cc: Vincent Donnefort <vdonnefort@google.com> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-02-23hrtimer: Report offline hrtimer enqueueFrederic Weisbecker
commit dad6a09f3148257ac1773cd90934d721d68ab595 upstream. The hrtimers migration on CPU-down hotplug process has been moved earlier, before the CPU actually goes to die. This leaves a small window of opportunity to queue an hrtimer in a blind spot, leaving it ignored. For example a practical case has been reported with RCU waking up a SCHED_FIFO task right before the CPUHP_AP_IDLE_DEAD stage, queuing that way a sched/rt timer to the local offline CPU. Make sure such situations never go unnoticed and warn when that happens. Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier") Reported-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240129235646.3171983-4-boqun.feng@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-02-23perf: Fix the nr_addr_filters fixPeter Zijlstra
[ Upstream commit 388a1fb7da6aaa1970c7e2a7d7fcd983a87a8484 ] Thomas reported that commit 652ffc2104ec ("perf/core: Fix narrow startup race when creating the perf nr_addr_filters sysfs file") made the entire attribute group vanish, instead of only the nr_addr_filters attribute. Additionally a stray return. Insufficient coffee was involved with both writing and merging the patch. Fixes: 652ffc2104ec ("perf/core: Fix narrow startup race when creating the perf nr_addr_filters sysfs file") Reported-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Thomas Richter <tmricht@linux.ibm.com> Link: https://lkml.kernel.org/r/20231122100756.GP8262@noisy.programming.kicks-ass.net Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-02-23bpf: Add map and need_defer parameters to .map_fd_put_ptr()Hou Tao
[ Upstream commit 20c20bd11a0702ce4dc9300c3da58acf551d9725 ] map is the pointer of outer map, and need_defer needs some explanation. need_defer tells the implementation to defer the reference release of the passed element and ensure that the element is still alive before the bpf program, which may manipulate it, exits. The following three cases will invoke map_fd_put_ptr() and different need_defer values will be passed to these callers: 1) release the reference of the old element in the map during map update or map deletion. The release must be deferred, otherwise the bpf program may incur use-after-free problem, so need_defer needs to be true. 2) release the reference of the to-be-added element in the error path of map update. The to-be-added element is not visible to any bpf program, so it is OK to pass false for need_defer parameter. 3) release the references of all elements in the map during map release. Any bpf program which has access to the map must have been exited and released, so need_defer=false will be OK. These two parameters will be used by the following patches to fix the potential use-after-free problem for map-in-map. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231204140425.1480317-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
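For reference, the callback shape the commit describes, as an illustrative declaration (the surrounding struct bpf_map_ops is abbreviated):

	struct bpf_map_ops {
		/* ... */
		void (*map_fd_put_ptr)(struct bpf_map *map, void *ptr,
				       bool need_defer);
		/* ... */
	};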
2024-02-23audit: Send netlink ACK before setting connection in auditd_setChris Riches
[ Upstream commit 022732e3d846e197539712e51ecada90ded0572a ] When auditd_set sets the auditd_conn pointer, audit messages can immediately be put on the socket by other kernel threads. If the backlog is large or the rate is high, this can immediately fill the socket buffer. If the audit daemon requested an ACK for this operation, a full socket buffer causes the ACK to get dropped, also setting ENOBUFS on the socket. To avoid this race and ensure ACKs get through, fast-track the ACK in this specific case to ensure it is sent before auditd_conn is set. Signed-off-by: Chris Riches <chris.riches@nutanix.com> [PM: fix some tab vs space damage] Signed-off-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-02-23perf/core: Fix narrow startup race when creating the perf nr_addr_filters sysfs fileGreg KH
[ Upstream commit 652ffc2104ec1f69dd4a46313888c33527145ccf ] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/2023061204-decal-flyable-6090@gregkh Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-02-23tick/sched: Preserve number of idle sleeps across CPU hotplug eventsTim Chen
commit 9a574ea9069be30b835a3da772c039993c43369b upstream. Commit 71fee48f ("tick-sched: Fix idle and iowait sleeptime accounting vs CPU hotplug") preserved total idle sleep time and iowait sleeptime across CPU hotplug events. Similar reasoning applies to the number of idle calls and idle sleeps to get the proper average of sleep time per idle invocation. Preserve those fields too. Fixes: 71fee48f ("tick-sched: Fix idle and iowait sleeptime accounting vs CPU hotplug") Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240122233534.3094238-1-tim.c.chen@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-02-23tracing: Ensure visibility when inserting an element into tracing_mapPetr Pavlu
[ Upstream commit 2b44760609e9eaafc9d234a6883d042fc21132a7 ] Running the following two commands in parallel on a multi-processor AArch64 machine can sporadically produce an unexpected warning about duplicate histogram entries: $ while true; do echo hist:key=id.syscall:val=hitcount > \ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist sleep 0.001 done $ stress-ng --sysbadaddr $(nproc) The warning looks as follows: [ 2911.172474] ------------[ cut here ]------------ [ 2911.173111] Duplicates detected: 1 [ 2911.173574] WARNING: CPU: 2 PID: 12247 at kernel/trace/tracing_map.c:983 tracing_map_sort_entries+0x3e0/0x408 [ 2911.174702] Modules linked in: iscsi_ibft(E) iscsi_boot_sysfs(E) rfkill(E) af_packet(E) nls_iso8859_1(E) nls_cp437(E) vfat(E) fat(E) ena(E) tiny_power_button(E) qemu_fw_cfg(E) button(E) fuse(E) efi_pstore(E) ip_tables(E) x_tables(E) xfs(E) libcrc32c(E) aes_ce_blk(E) aes_ce_cipher(E) crct10dif_ce(E) polyval_ce(E) polyval_generic(E) ghash_ce(E) gf128mul(E) sm4_ce_gcm(E) sm4_ce_ccm(E) sm4_ce(E) sm4_ce_cipher(E) sm4(E) sm3_ce(E) sm3(E) sha3_ce(E) sha512_ce(E) sha512_arm64(E) sha2_ce(E) sha256_arm64(E) nvme(E) sha1_ce(E) nvme_core(E) nvme_auth(E) t10_pi(E) sg(E) scsi_mod(E) scsi_common(E) efivarfs(E) [ 2911.174738] Unloaded tainted modules: cppc_cpufreq(E):1 [ 2911.180985] CPU: 2 PID: 12247 Comm: cat Kdump: loaded Tainted: G E 6.7.0-default #2 1b58bbb22c97e4399dc09f92d309344f69c44a01 [ 2911.182398] Hardware name: Amazon EC2 c7g.8xlarge/, BIOS 1.0 11/1/2018 [ 2911.183208] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--) [ 2911.184038] pc : tracing_map_sort_entries+0x3e0/0x408 [ 2911.184667] lr : tracing_map_sort_entries+0x3e0/0x408 [ 2911.185310] sp : ffff8000a1513900 [ 2911.185750] x29: ffff8000a1513900 x28: ffff0003f272fe80 x27: 0000000000000001 [ 2911.186600] x26: ffff0003f272fe80 x25: 0000000000000030 x24: 0000000000000008 [ 2911.187458] x23: ffff0003c5788000 x22: ffff0003c16710c8 x21: ffff80008017f180 [ 2911.188310] x20: ffff80008017f000 x19: ffff80008017f180 x18: ffffffffffffffff [ 2911.189160] x17: 0000000000000000 x16: 0000000000000000 x15: ffff8000a15134b8 [ 2911.190015] x14: 0000000000000000 x13: 205d373432323154 x12: 5b5d313131333731 [ 2911.190844] x11: 00000000fffeffff x10: 00000000fffeffff x9 : ffffd1b78274a13c [ 2911.191716] x8 : 000000000017ffe8 x7 : c0000000fffeffff x6 : 000000000057ffa8 [ 2911.192554] x5 : ffff0012f6c24ec0 x4 : 0000000000000000 x3 : ffff2e5b72b5d000 [ 2911.193404] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0003ff254480 [ 2911.194259] Call trace: [ 2911.194626] tracing_map_sort_entries+0x3e0/0x408 [ 2911.195220] hist_show+0x124/0x800 [ 2911.195692] seq_read_iter+0x1d4/0x4e8 [ 2911.196193] seq_read+0xe8/0x138 [ 2911.196638] vfs_read+0xc8/0x300 [ 2911.197078] ksys_read+0x70/0x108 [ 2911.197534] __arm64_sys_read+0x24/0x38 [ 2911.198046] invoke_syscall+0x78/0x108 [ 2911.198553] el0_svc_common.constprop.0+0xd0/0xf8 [ 2911.199157] do_el0_svc+0x28/0x40 [ 2911.199613] el0_svc+0x40/0x178 [ 2911.200048] el0t_64_sync_handler+0x13c/0x158 [ 2911.200621] el0t_64_sync+0x1a8/0x1b0 [ 2911.201115] ---[ end trace 0000000000000000 ]--- The problem appears to be caused by CPU reordering of writes issued from __tracing_map_insert(). The check for the presence of an element with a given key in this function is: val = READ_ONCE(entry->val); if (val && keys_match(key, val->key, map->key_size)) ... 
The write of a new entry is: elt = get_free_elt(map); memcpy(elt->key, key, map->key_size); entry->val = elt; The "memcpy(elt->key, key, map->key_size);" and "entry->val = elt;" stores may become visible in the reversed order on another CPU. This second CPU might then incorrectly determine that a new key doesn't match an already present val->key and subsequently insert a new element, resulting in a duplicate. Fix the problem by adding a write barrier between "memcpy(elt->key, key, map->key_size);" and "entry->val = elt;", and for good measure, also use WRITE_ONCE(entry->val, elt) for publishing the element. The sequence pairs with the mentioned "READ_ONCE(entry->val);" and the "val->key" check which has an address dependency. The barrier is placed on a path executed when adding an element for a new key. Subsequent updates targeting the same key remain unaffected. From the user's perspective, the issue was introduced by commit c193707dde77 ("tracing: Remove code which merges duplicates"), which followed commit cbf4100efb8f ("tracing: Add support to detect and avoid duplicates"). The previous code operated differently; it inherently expected potential races which result in duplicates but merged them later when they occurred. Link: https://lore.kernel.org/linux-trace-kernel/20240122150928.27725-1-petr.pavlu@suse.com Fixes: c193707dde77 ("tracing: Remove code which merges duplicates") Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
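Condensed, the publication order the fix enforces looks roughly like this (an illustrative sketch of the sequence quoted above, not the verbatim patch):

	elt = get_free_elt(map);
	memcpy(elt->key, key, map->key_size);
	/* Make the key bytes visible before the element is published; this
	 * pairs with the READ_ONCE(entry->val) and val->key check on readers. */
	smp_wmb();
	WRITE_ONCE(entry->val, elt);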
2024-02-23PM: hibernate: Enforce ordering during image compression/decompressionHongchen Zhang
commit 71cd7e80cfde548959952eac7063aeaea1f2e1c6 upstream. An S4 (suspend to disk) test on the LoongArch 3A6000 platform sometimes fails with the following error message in the dmesg log:
	Invalid LZO compressed length
That happens because when compressing/decompressing the image, the synchronization between the control thread and the compress/decompress/crc thread is based on a relaxed ordering interface, which is unreliable, and the following situation may occur:
	CPU 0					CPU 1
	save_image_lzo				lzo_compress_threadfn
						  atomic_set(&d->stop, 1);
	  atomic_read(&data[thr].stop)
	  data[thr].cmp = data[thr].cmp_len;
						  WRITE data[thr].cmp_len
Then CPU 0 gets a stale cmp_len and writes it to disk. During resume from S4, the wrong cmp_len is loaded. To maintain data consistency between the two threads, use the acquire/release variants of the atomic set and read operations. Fixes: 081a9d043c98 ("PM / Hibernate: Improve performance of LZO/plain hibernation, checksum image") Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Hongchen Zhang <zhanghongchen@loongson.cn> Co-developed-by: Weihao Li <liweihao@loongson.cn> Signed-off-by: Weihao Li <liweihao@loongson.cn> [ rjw: Subject rewrite and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
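In essence the fix swaps the relaxed accessors for their ordered variants; a hedged sketch of the pairing (field names follow the diagram above, and the exact call sites in the hibernation image code may differ):

	/* Compress thread: publish the length, then the stop flag (release). */
	d->cmp_len = compressed_len;
	atomic_set_release(&d->stop, 1);

	/* Control thread: observe the flag (acquire); the length read
	 * afterwards is then guaranteed to be the one written above. */
	if (atomic_read_acquire(&data[thr].stop))
		bytes_to_write = data[thr].cmp_len;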
2024-02-20Merge tag 'v5.4.268' into v5.4/standard/baseBruce Ashfield
This is the 5.4.268 stable release
2024-01-25kdb: Fix a potential buffer overflow in kdb_local()Christophe JAILLET
[ Upstream commit 4f41d30cd6dc865c3cbc1a852372321eba6d4e4c ] When appending "[defcmd]" to 'kdb_prompt_str', the size of the string already in the buffer should be taken into account. An option could be to switch from strncat() to strlcat(), which does the correct test to avoid such an overflow. However, this actually looks like dead code, because 'defcmd_in_progress' can't be true here. See a more detailed explanation at [1]. [1]: https://lore.kernel.org/all/CAD=FV=WSh7wKN7Yp-3wWiDgX4E3isQ8uh0LCzTmd1v9Cg9j+nQ@mail.gmail.com/ Fixes: 5d5314d6795f ("kdb: core for kgdb back end (1 of 2)") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-01-25kdb: Censor attempts to set PROMPT without ENABLE_MEM_READDaniel Thompson
[ Upstream commit ad99b5105c0823ff02126497f4366e6a8009453e ] Currently the PROMPT variable could be abused to provoke the printf() machinery to read outside the current stack frame. Normally this doesn't matter because md is already a much better tool for reading from memory. However the md command can be disabled by not setting KDB_ENABLE_MEM_READ. Let's also prevent PROMPT from being modified in these circumstances. Whilst adding a comment to help future code reviewers, we also remove the #ifdef where PROMPT is consumed. There is no problem passing an unused argument (0) to snprintf() when !CONFIG_SMP. Reported-by: Wang Xiayang <xywang.sjtu@sjtu.edu.cn> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Stable-dep-of: 4f41d30cd6dc ("kdb: Fix a potential buffer overflow in kdb_local()") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-01-25tick-sched: Fix idle and iowait sleeptime accounting vs CPU hotplugHeiko Carstens
commit 71fee48fb772ac4f6cfa63dbebc5629de8b4cc09 upstream. When offlining and onlining CPUs the overall reported idle and iowait times as reported by /proc/stat jump backward and forward:
	cpu 132 0 176 225249 47 6 6 21 0 0
	cpu0 80 0 115 112575 33 3 4 18 0 0
	cpu1 52 0 60 112673 13 3 1 2 0 0
	cpu 133 0 177 226681 47 6 6 21 0 0
	cpu0 80 0 116 113387 33 3 4 18 0 0
	cpu 133 0 178 114431 33 6 6 21 0 0     <---- jump backward
	cpu0 80 0 116 114247 33 3 4 18 0 0
	cpu1 52 0 61 183 0 3 1 2 0 0           <---- idle + iowait start with 0
	cpu 133 0 178 228956 47 6 6 21 0 0     <---- jump forward
	cpu0 81 0 117 114929 33 3 4 18 0 0
Reason for this is that get_idle_time() in fs/proc/stat.c has different sources for both values depending on if a CPU is online or offline:
- if a CPU is online the values may be taken from its per cpu tick_cpu_sched structure
- if a CPU is offline the values are taken from its per cpu cpustat structure
The problem is that the per cpu tick_cpu_sched structure is set to zero on CPU offline. See tick_cancel_sched_timer() in kernel/time/tick-sched.c. Therefore when a CPU is brought offline and online afterwards both its idle and iowait sleeptime will be zero, causing a jump backward in total system idle and iowait sleeptime. In a similar way if a CPU is then brought offline again the total idle and iowait sleeptimes will jump forward. It looks like this behavior was introduced with commit 4b0c0f294f60 ("tick: Cleanup NOHZ per cpu data on cpu down"). This was only noticed now on s390, since we switched to generic idle time reporting with commit be76ea614460 ("s390/idle: remove arch_cpu_idle_time() and corresponding code"). Fix this by preserving the values of idle_sleeptime and iowait_sleeptime members of the per-cpu tick_sched structure on CPU hotplug. Fixes: 4b0c0f294f60 ("tick: Cleanup NOHZ per cpu data on cpu down") Reported-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20240115163555.1004144-1-hca@linux.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
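The fix preserves two accumulators across the per-CPU state reset on CPU offline; a condensed sketch of the idea in tick_cancel_sched_timer() (illustrative, not the verbatim patch):

	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
	ktime_t idle_sleeptime, iowait_sleeptime;

	/* Keep the accumulated sleep times while clearing the rest of the
	 * per-CPU tick state. */
	idle_sleeptime = ts->idle_sleeptime;
	iowait_sleeptime = ts->iowait_sleeptime;
	memset(ts, 0, sizeof(*ts));
	ts->idle_sleeptime = idle_sleeptime;
	ts->iowait_sleeptime = iowait_sleeptime;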
2024-01-25dma-mapping: clear dev->dma_mem to NULL after freeing itJoakim Zhang
[ Upstream commit b07bc2347672cc8c7293c64499f1488278c5ca3d ] Reproduced with below sequence: dma_declare_coherent_memory()->dma_release_coherent_memory() ->dma_declare_coherent_memory()->"return -EBUSY" error It will return -EBUSY from the dma_assign_coherent_memory() in dma_declare_coherent_memory(), the reason is that dev->dma_mem pointer has not been set to NULL after it's freed. Fixes: cf65a0f6f6ff ("dma-mapping: move all DMA mapping code to kernel/dma") Signed-off-by: Joakim Zhang <joakim.zhang@cixtech.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-01-25bpf, lpm: Fix check prefixlen before walking trieFlorian Lehner
[ Upstream commit 9b75dbeb36fcd9fc7ed51d370310d0518a387769 ] When looking up an element in LPM trie, the condition 'matchlen == trie->max_prefixlen' will never return true, if key->prefixlen is larger than trie->max_prefixlen. Consequently all elements in the LPM trie will be visited and no element is returned in the end. To resolve this, check key->prefixlen first before walking the LPM trie. Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation") Signed-off-by: Florian Lehner <dev@der-flo.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231105085801.3742-1-dev@der-flo.net Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
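The shape of the fix as described above is an early bail-out before walking the trie (placement within the lookup path is illustrative):

	/* A key longer than the trie's maximum prefix can never match any
	 * stored element, so do not walk the trie at all. */
	if (key->prefixlen > trie->max_prefixlen)
		return NULL;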
2024-01-25ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMISteven Rostedt (Google)
[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ] As the ring buffer recording requires cmpxchg() to work, if the architecture does not support cmpxchg in NMI, then do not do any recording within an NMI. Link: https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18540@gandalf.local.home Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-01-25tracing: Add size check when printing trace_marker outputSteven Rostedt (Google)
[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ] If for some reason the trace_marker write does not have a nul byte for the string, it will overflow the print: trace_seq_printf(s, ": %s", field->buf); The field->buf could be missing the nul byte. To prevent overflow, add the max size that the buf can be by using the event size and the field location. int max = iter->ent_size - offsetof(struct print_entry, buf); trace_seq_printf(s, ": %*.s", max, field->buf); Link: https://lore.kernel.org/linux-trace-kernel/20231212084444.4619b8ce@gandalf.local.home Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-01-25tracing: Have large events show up as '[LINE TOO BIG]' instead of nothingSteven Rostedt (Google)
[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ] If a large event was added to the ring buffer that is larger than what the trace_seq can handle, it just drops the output: ~# cat /sys/kernel/tracing/trace # tracer: nop # # entries-in-buffer/entries-written: 2/2 #P:8 # # _-----=> irqs-off/BH-disabled # / _----=> need-resched # | / _---=> hardirq/softirq # || / _--=> preempt-depth # ||| / _-=> migrate-disable # |||| / delay # TASK-PID CPU# ||||| TIMESTAMP FUNCTION # | | | ||||| | | <...>-859 [001] ..... 141.118951: tracing_mark_write <...>-859 [001] ..... 141.148201: tracing_mark_write: 78901234 Instead, catch this case and add some context: ~# cat /sys/kernel/tracing/trace # tracer: nop # # entries-in-buffer/entries-written: 2/2 #P:8 # # _-----=> irqs-off/BH-disabled # / _----=> need-resched # | / _---=> hardirq/softirq # || / _--=> preempt-depth # ||| / _-=> migrate-disable # |||| / delay # TASK-PID CPU# ||||| TIMESTAMP FUNCTION # | | | ||||| | | <...>-852 [001] ..... 121.550551: tracing_mark_write[LINE TOO BIG] <...>-852 [001] ..... 121.550581: tracing_mark_write: 78901234 This now emulates the same output as trace_pipe. Link: https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a026@gandalf.local.home Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-01-11Merge tag 'v5.4.266' into v5.4/standard/baseBruce Ashfield
This is the 5.4.266 stable release
2024-01-08ring-buffer: Fix wake ups when buffer_percent is set to 100Steven Rostedt (Google)
commit 623b1f896fa8a669a277ee5a258307a16c7377a3 upstream. The tracefs file "buffer_percent" is to allow user space to set a water-mark on how much of the tracing ring buffer needs to be filled in order to wake up a blocked reader.
	0   - is to wait until any data is in the buffer
	1   - is to wait for 1% of the sub buffers to be filled
	50  - would be half of the sub buffers are filled with data
	100 - is not to wake the waiter until the ring buffer is completely full
Unfortunately the test for being full was:
	dirty = ring_buffer_nr_dirty_pages(buffer, cpu);
	return (dirty * 100) > (full * nr_pages);
Where "full" is the value for "buffer_percent". There are two issues with the above when full == 100.
1. dirty * 100 > 100 * nr_pages will never be true. That is, the above is basically saying that if the user sets buffer_percent to 100, more pages need to be dirty than exist in the ring buffer!
2. The page that the writer is on is never considered dirty, as dirty pages are only those that are full. When the writer goes to a new sub-buffer, it clears the contents of that sub-buffer. That is, even if the check was ">=" it would still not be equal, as the most pages that can be considered "dirty" is nr_pages - 1.
To fix this, add one to dirty and use ">=" in the compare. Link: https://lore.kernel.org/linux-trace-kernel/20231226125902.4a057f1d@gandalf.local.home Cc: stable@vger.kernel.org Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Fixes: 03329f9939781 ("tracing: Add tracefs file buffer_percentage") Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
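Per the last paragraph above, the corrected water-mark test adds one to the dirty count (to account for the page the writer is on) and compares with ">="; roughly (an illustrative sketch, not the verbatim patch):

	dirty = ring_buffer_nr_dirty_pages(buffer, cpu) + 1;	/* count the writer's page */
	return (dirty * 100) >= (full * nr_pages);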
2023-12-27Merge tag 'v5.4.265' into v5.4/standard/baseBruce Ashfield
This is the 5.4.265 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmWC/R4ACgkQONu9yGCS # aT6XRw/+OE/DwEAaRGgM/gGLayr/n6zJoL7DUhLxkS+wG3beQXFsdigCHRRhTh58 # OCQP6pL6UlbJ8Yge3FtpYqqkR+UyY7c/wsjJI05v7dKUJ742rpFNML9w0Dg9Au8w # k4TsVU01nnr9HC7rY8k8zYZ/DZdULvIX8RNhSOi0CMO2gkdMUFrh/IC0q5JIWKmL # xFmMieGtsr4kl4sP2oUbYihf1Li4oblouBV+70kPViC6XA0YhOSCT0+PfDxp5CuD # sux1srZGY/782zI0O6+ObsYascwgL+wk0oEJRj1vO02tJKKbtEGMJvGO9Mcpto6B # 2YBq40PAhyeKFdt4YzOWCSO7WjvWP7h15U68EY+E6ruy9La+P/dTyhAqsBBTVDEs # PGFIjxc5pnHn72JQ/U3yJoHFM7yW26VEmEGItsd81VermNgqe2scSPSPHIfM0qFU # z2l0PcQkm+SLK2cFDSCBUBaXfx4R2UuWe/QY07K2eN5YCC4mqROajVh4Vqyj1Q8j # PLw/yrt8lOJcDEDMtFq7hcXKMzcb/dYfCZcSfxl6YJeaR4X4ViOkDGVhLEkVeOn5 # K3kyIvPd268rmoy/9jTuDYu6axMhg2eE2dTQqBg8pFwIOgetUwtYcBhyxDtmGZm1 # lNUYmY84BSHZwXuKjNXGgZ5DI0U7nAWis+odR0scHpVKwaC8ta8= # =d0Ht # -----END PGP SIGNATURE----- # gpg: Signature made Wed 20 Dec 2023 09:41:34 AM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2023-12-20ring-buffer: Fix memory leak of free pageSteven Rostedt (Google)
commit 17d801758157bec93f26faaf5ff1a8b9a552d67a upstream.

Reading the ring buffer swaps a sub-buffer within the ring buffer with an
empty sub-buffer. This gives the reader full access to the content of the
sub-buffer that was swapped out, without having to worry about contention
with the writer.

Readers call ring_buffer_alloc_read_page() to allocate the page that will be
swapped with the ring buffer. When the code is finished with the reader
page, it calls ring_buffer_free_read_page(). Instead of freeing the page,
that function stores it as a spare, and the next call to
ring_buffer_alloc_read_page() returns this spare instead of calling into the
memory management system to allocate a new page.

Unfortunately, when the ring buffer itself is freed, this spare page is not
freed, which causes a memory leak.

Link: https://lore.kernel.org/linux-trace-kernel/20231210221250.7b9cc83c@rorschach.local.home

Cc: stable@vger.kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fixes: 73a757e63114d ("ring-buffer: Return reader page back into existing ring buffer")
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
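[ Editorial illustration ] The pattern that leaks here is a one-slot cache:
"freeing" a page may park it as a spare for the next allocation, so teardown
has to release the spare as well. A minimal stand-alone sketch of the idea
(illustrative names and types, not the actual ring-buffer code):

  #include <stdlib.h>

  struct buffer_cache {
      void *spare;                    /* one cached "reader page" */
  };

  static void *cache_alloc_page(struct buffer_cache *c)
  {
      if (c->spare) {                 /* reuse the spare if we have one */
          void *page = c->spare;
          c->spare = NULL;
          return page;
      }
      return malloc(4096);            /* otherwise really allocate */
  }

  static void cache_free_page(struct buffer_cache *c, void *page)
  {
      if (!c->spare)                  /* keep one page around as a spare */
          c->spare = page;
      else
          free(page);
  }

  static void cache_destroy(struct buffer_cache *c)
  {
      free(c->spare);                 /* the fix: don't forget the spare */
      c->spare = NULL;
  }

  int main(void)
  {
      struct buffer_cache c = { 0 };
      void *p = cache_alloc_page(&c);

      cache_free_page(&c, p);         /* page is now cached, not freed */
      cache_destroy(&c);              /* without this, the spare leaks */
      return 0;
  }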
2023-12-20perf: Fix perf_event_validate_size() lockdep splatMark Rutland
commit 7e2c1e4b34f07d9aa8937fab88359d4a0fce468e upstream.

When lockdep is enabled, the for_each_sibling_event(sibling, event) macro
checks that event->ctx->mutex is held. When creating a new group leader
event, we call perf_event_validate_size() on a partially initialized event
where event->ctx is NULL, and so when for_each_sibling_event() attempts to
check event->ctx->mutex, we get a splat, as reported by Lucas De Marchi:

  WARNING: CPU: 8 PID: 1471 at kernel/events/core.c:1950 __do_sys_perf_event_open+0xf37/0x1080

This only happens for a new event which is its own group_leader, and in this
case there cannot be any sibling events. Thus it's safe to skip the check
for siblings, which avoids having to make invasive and ugly changes to
for_each_sibling_event().

Avoid the splat by bailing out early when the new event is its own
group_leader.

Fixes: 382c27f4ed28f803 ("perf: Fix perf_event_validate_size()")
Closes: https://lore.kernel.org/lkml/20231214000620.3081018-1-lucas.demarchi@intel.com/
Closes: https://lore.kernel.org/lkml/ZXpm6gQ%2Fd59jGsuW@xpf.sh.intel.com/
Reported-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reported-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20231215112450.3972309-1-mark.rutland@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
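[ Editorial illustration ] A compact stand-alone model of the failure mode
and of the early-bail fix described above. This is not the kernel code; the
struct and function names are invented, and the assert() stands in for the
lockdep check that for_each_sibling_event() performs on event->ctx->mutex.

  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct ctx { int mutex_held; };

  struct event {
      struct ctx   *ctx;             /* NULL while the event is half-built */
      struct event *group_leader;
  };

  /* Stands in for the lockdep assertion done before walking siblings. */
  static void assert_ctx_locked(const struct event *e)
  {
      assert(e->ctx && e->ctx->mutex_held);   /* splats if ctx is NULL */
  }

  static bool validate_size(struct event *event)
  {
      struct event *leader = event->group_leader;

      /* The fix: a brand-new group leader has no siblings and no ctx yet,
       * so bail out before anything dereferences ctx. */
      if (event == leader)
          return true;

      assert_ctx_locked(leader);      /* safe: an existing leader has a ctx */
      /* ... walk the leader's siblings and check their sizes here ... */
      return true;
  }

  int main(void)
  {
      struct event new_leader = { .ctx = NULL };

      new_leader.group_leader = &new_leader;  /* event is its own leader */
      printf("validate_size: %d\n", validate_size(&new_leader));
      return 0;
  }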
2023-12-20cred: switch to using atomic_long_tJens Axboe
commit f8fa5d76925991976b3e7076f9d1052515ec1fca upstream.

There are multiple ways to grab references to credentials, and the only
protection we have against overflowing the reference count is the memory
required to do so.

With memory sizes only moving in one direction, let's bump the reference
count to 64-bit and move it outside the realm of feasibly overflowing.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
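[ Editorial illustration ] The rough arithmetic: a 32-bit counter wraps after
about 4.3 billion grabs, each of which has to be backed by some amount of
kernel memory, whereas a 64-bit counter needs on the order of 9.2 * 10^18
grabs, far beyond any realistic amount of memory. A stand-alone sketch of
the widened get/put pattern using C11 atomics (the kernel uses its own
atomic_long_t helpers; names here are invented):

  #include <limits.h>
  #include <stdatomic.h>
  #include <stdio.h>

  struct cred_like {
      atomic_long usage;              /* long-sized instead of 32-bit */
  };

  static void get_cred_like(struct cred_like *c)
  {
      atomic_fetch_add_explicit(&c->usage, 1, memory_order_relaxed);
  }

  static void put_cred_like(struct cred_like *c)
  {
      if (atomic_fetch_sub_explicit(&c->usage, 1, memory_order_release) == 1) {
          /* last reference dropped: the object would be freed here */
      }
  }

  int main(void)
  {
      struct cred_like c;

      atomic_init(&c.usage, 1);       /* creator holds the first reference */
      get_cred_like(&c);
      put_cred_like(&c);
      put_cred_like(&c);
      printf("a long counter can reach %ld before overflow\n", LONG_MAX);
      return 0;
  }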
2023-12-19Merge tag 'v5.4.264' into v5.4/standard/baseBruce Ashfield
This is the 5.4.264 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmV5528ACgkQONu9yGCS # aT7+DxAAl/t4oGT1Di8mIhCfqsezTj/SQ6HAMaAFpKjGXdgTBX9QavwTHp35qLpv # xZtibdl611oEDT2B3//cvJu9Qs8sJGlDWxpJF8fkF/NED2EHLRwUAOmhc4fyBCBq # 01GfPRefU9G5E0Nw2g3o07dD0otKQzENh74iAzUr/cyju5TBgS4NC7CljiD6GXP8 # DtCcbz2UMmHF5icyasjw7WoCqKaWUn7KLqU3RyQfmDgnh3z0vSXKs17CFoNk1+R6 # EAkZJkrUsIDO3N6aRkJGvwtbEFDws4S7onpcthckXt6IF/OQ+h8LcHvyTpblVeWH # qVBXmj1QZD+SwfP5qVtM+RwHHTE3GVJI3+MVTInu0vzgEz6lhNPILzWwso+AIhHM # +SBZkx4/pO2LbrSgqn+NYWRcBTn4fXNURPd+zLXNJ1ZSnhamWm7kuFK8B36lz0s8 # CB5ngLld1wg6m2NVPneqAeEcD54msGd18iNZvnxfx3c28sb27RgLqLJQKdSrtFMH # tZj0+3KGkYHzWQmt07iLmg1DPAmZSNzMW7MDJRHlqDCKS2gBKO/LKxWiri2WPh+o # IvgrfGzEJsyYXi5+JcGC9+RkoPcGaous4VA2kWFJSjXiXPrJC4jknQm3mtdMiTf3 # V9DEOMvot9ELGI9RODaNsk59C2RhOuVimm2XWV9n88UcX38reAQ= # =gOna # -----END PGP SIGNATURE----- # gpg: Signature made Wed 13 Dec 2023 12:18:39 PM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2023-12-13perf: Fix perf_event_validate_size()Peter Zijlstra
[ Upstream commit 382c27f4ed28f803b1f1473ac2d8db0afc795a1b ]

Budimir noted that perf_event_validate_size() only checks the size of the
newly added event, even though the sizes of all existing events can also
change, because not all events have the same read_format.

When we attach the new event, in perf_group_attach(), we do re-compute the
size for all events.

Fixes: a723968c0ed3 ("perf: Fix u16 overflows")
Reported-by: Budimir Markovic <markovicbudimir@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
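[ Editorial illustration ] The reason existing events' sizes change is that,
with PERF_FORMAT_GROUP, an event's read record contains one entry per group
member, so adding a member grows every member's record, not just the new
one's. A rough stand-alone model of that growth (layout simplified from the
read_format description in include/uapi/linux/perf_event.h; optional time
fields omitted; this is not the kernel's size helper):

  #include <stdint.h>
  #include <stdio.h>

  /* Approximate size of a PERF_FORMAT_GROUP read record:
   * one u64 "nr" header plus one entry (value [+ id]) per group member. */
  static size_t group_read_size(size_t nr_members, int has_id)
  {
      size_t per_member = sizeof(uint64_t) * (has_id ? 2 : 1);

      return sizeof(uint64_t) + nr_members * per_member;
  }

  int main(void)
  {
      /* An existing leader using PERF_FORMAT_GROUP | PERF_FORMAT_ID: */
      printf("leader read size with 2 members: %zu bytes\n",
             group_read_size(2, 1));

      /* Adding one more event to the group enlarges the leader's record
       * too, which is why validating only the newly added event is not
       * enough - every member's size has to be re-checked. */
      printf("leader read size with 3 members: %zu bytes\n",
             group_read_size(3, 1));
      return 0;
  }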