path: root/arch/powerpc/mm
2024-03-28  Merge tag 'v5.4.269' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.269 stable release
2024-02-23  powerpc: pmd_move_must_withdraw() is only needed for CONFIG_TRANSPARENT_HUGEPAGE  (Stephen Rothwell)
[ Upstream commit 0d555b57ee660d8a871781c0eebf006e855e918d ]

The linux-next build of powerpc64 allnoconfig fails with:

  arch/powerpc/mm/book3s64/pgtable.c:557:5: error: no previous prototype for 'pmd_move_must_withdraw'
    557 | int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
        |     ^~~~~~~~~~~~~~~~~~~~~~

Caused by commit c6345dfa6e3e ("Makefile.extrawarn: turn on missing-prototypes globally").

Fix it by moving the function definition under CONFIG_TRANSPARENT_HUGEPAGE, like the prototype. The function is only called when CONFIG_TRANSPARENT_HUGEPAGE=y.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
[mpe: Flesh out change log from linux-next patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231127132809.45c2b398@canb.auug.org.au
Signed-off-by: Sasha Levin <sashal@kernel.org>
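A minimal sketch of the fix's shape (the first parameter comes from the error message above; the remaining parameters and the elided body are assumptions for illustration):

  /* arch/powerpc/mm/book3s64/pgtable.c -- sketch */
  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
                             struct spinlock *old_pmd_ptl,
                             struct vm_area_struct *vma)
  {
          /* body unchanged; only the #ifdef guard placement is the fix */
          return 1;
  }
  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */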
2024-02-23  powerpc/mm: Fix null-pointer dereference in pgtable_cache_add  (Kunwu Chan)
[ Upstream commit f46c8a75263f97bda13c739ba1c90aced0d3b071 ]

kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Ensure the allocation was successful by checking the pointer validity.

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231204023223.2447523-1-chentao@kylinos.cn
Signed-off-by: Sasha Levin <sashal@kernel.org>
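A minimal sketch of the pattern, assuming a plain bail-out is acceptable at this call site (the format string and early return are illustrative, not the exact kernel code):

  char *name = kasprintf(GFP_KERNEL, "pgtable-2^%d", shift);
  if (!name)      /* kasprintf() returns NULL on allocation failure */
          return; /* don't pass a NULL name on to kmem_cache_create() */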
2023-09-04  Merge tag 'v5.4.255' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.255 stable release
2023-08-30  powerpc/kasan: Disable KCOV in KASAN code  (Benjamin Gray)
[ Upstream commit ccb381e1af1ace292153c88eb1fffa5683d16a20 ]

As per the generic KASAN code in mm/kasan, disable KCOV with KCOV_INSTRUMENT := n in the makefile. This fixes a ppc64 boot hang when KCOV and KASAN are enabled. kasan_early_init() gets called before a PACA is initialised, but the KCOV hook expects a valid PACA.

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230710044143.146840-1-bgray@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
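The change itself is the single make variable quoted in the log (the file path here is an assumption):

  # arch/powerpc/mm/kasan/Makefile
  KCOV_INSTRUMENT := n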
2023-08-21  Merge tag 'v5.4.253' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.253 stable release
2023-08-11  powerpc/mm/altmap: Fix altmap boundary check  (Aneesh Kumar K.V)
[ Upstream commit 6722b25712054c0f903b839b8f5088438dd04df3 ]

altmap->free includes the entire free space from which altmap blocks can be allocated. So when checking whether the kernel is doing altmap block free, compute the boundary correctly, otherwise memory hotunplug can fail.

Fixes: 9ef34630a461 ("powerpc/mm: Fallback to RAM if the altmap is unusable")
Signed-off-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230724181320.471386-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
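A sketch of what "compute the boundary correctly" amounts to, assuming the usual struct vmem_altmap fields: since altmap->free already covers the whole allocatable area, the region ends at reserve + free past the base, with no extra terms:

  unsigned long alt_start = altmap->base_pfn;
  unsigned long alt_end   = altmap->base_pfn + altmap->reserve + altmap->free;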
2023-07-27  Merge tag 'v5.4.251' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.251 stable release
2023-07-27  powerpc/mm/dax: Fix the condition when checking if altmap vmemap can cross-boundary  (Aneesh Kumar K.V)
[ Upstream commit c8eebc4a99f15280654f23e914e746c40a516e50 ]

Without this fix, the last subsection vmemmap can end up in memory even if the namespace is created with -M mem and has sufficient space in the altmap area.

Fixes: cf387d9644d8 ("libnvdimm/altmap: Track namespace boundaries in altmap")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Tested-by: Sachin Sant <sachinp@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230616110826.344417-6-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-06-05  Merge tag 'v5.4.244' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.244 stable release
2023-05-30  powerpc/64s/radix: Fix soft dirty tracking  (Michael Ellerman)
commit 66b2ca086210732954a7790d63d35542936fc664 upstream.

It was reported that soft dirty tracking doesn't work when using the Radix MMU.

The tracking is supposed to work by clearing the soft dirty bit for a mapping and then write protecting the PTE. If/when the page is written to, a page fault occurs and the soft dirty bit is added back via pte_mkdirty(). For example in wp_page_reuse():

  entry = maybe_mkwrite(pte_mkdirty(entry), vma);
  if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
          update_mmu_cache(vma, vmf->address, vmf->pte);

Unfortunately on radix _PAGE_SOFTDIRTY is being dropped by radix__ptep_set_access_flags(), called from ptep_set_access_flags(), meaning the soft dirty bit is not set even though the page has been written to.

Fix it by adding _PAGE_SOFTDIRTY to the set of bits that are able to be changed in radix__ptep_set_access_flags().

Fixes: b0b5e9b13047 ("powerpc/mm/radix: Add radix pte #defines")
Cc: stable@vger.kernel.org # v4.7+
Reported-by: Dan Horák <dan@danny.cz>
Link: https://lore.kernel.org/r/20230511095558.56663a50f86bdc4cd97700b7@danny.cz
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230511114224.977423-1-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
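A sketch of the fix, assuming the mask-of-changeable-bits form that radix__ptep_set_access_flags() uses; the bit name is spelled as in the change log, and the in-tree macro spelling may differ:

  /* in radix__ptep_set_access_flags() -- sketch */
  unsigned long set = pte_val(entry) &
          (_PAGE_DIRTY | _PAGE_SOFTDIRTY | _PAGE_ACCESSED |
           _PAGE_READ | _PAGE_WRITE | _PAGE_EXEC);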
2023-04-06  Merge tag 'v5.4.240' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.240 stable release
2023-04-05  dma-mapping: drop the dev argument to arch_sync_dma_for_*  (Christoph Hellwig)
[ Upstream commit 56e35f9c5b87ec1ae93e483284e189c84388de16 ]

These are pure cache maintenance routines, so drop the unused struct device argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Stable-dep-of: ab327f8acdf8 ("mips: bmips: BCM6358: disable RAC flush for TP1")
Signed-off-by: Sasha Levin <sashal@kernel.org>
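The effect on the prototypes, sketched for arch_sync_dma_for_device() (arch_sync_dma_for_cpu() changes the same way):

  /* before */
  void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                                size_t size, enum dma_data_direction dir);
  /* after */
  void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                                enum dma_data_direction dir);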
2022-09-09  Merge tag 'v5.4.211' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.211 stable release
2022-08-25  powerpc/ptdump: Fix display of RW pages on FSL_BOOK3E  (Christophe Leroy)
commit dd8de84b57b02ba9c1fe530a6d916c0853f136bd upstream.

On FSL_BOOK3E, _PAGE_RW is defined with two bits, one for user and one for supervisor. As soon as one of the two bits is set, the page has to be displayed as RW. But the way it is implemented today requires both bits to be set in order to display it as RW.

Instead of displaying RW when the _PAGE_RW bits are set and R otherwise, reverse the logic and display R when the _PAGE_RW bits are all 0 and RW otherwise.

This change has no impact on other platforms as _PAGE_RW is a single bit on all of them.

Fixes: 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0c33b96317811edf691e81698aaee8fa45ec3449.1656427391.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
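In ptdump's flag-table terms the entry flips from "match all bits set" to "match all bits clear"; a sketch using the field layout of ptdump's flag arrays (the exact strings are illustrative):

  /* before: "rw" printed only when every _PAGE_RW bit is set */
  { .mask = _PAGE_RW, .val = _PAGE_RW, .set = "rw", .clear = "r " },

  /* after: "r" printed only when every _PAGE_RW bit is clear */
  { .mask = _PAGE_RW, .val = 0,        .set = "r ", .clear = "rw" },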
2022-07-05  Merge tag 'v5.4.203' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.203 stable release
2022-07-05  Merge tag 'v5.4.200' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.200 stable release
2022-07-02  powerpc/ftrace: Remove ftrace init tramp once kernel init is complete  (Naveen N. Rao)
commit 84ade0a6655bee803d176525ef457175cbf4df22 upstream.

Stop using the ftrace trampoline for the init section once kernel init is complete.

Fixes: 67361cf8071286 ("powerpc/ftrace: Handle large kernel configs")
Cc: stable@vger.kernel.org # v4.20+
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220516071422.463738-1-naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-06-22  powerpc/mm: Switch obsolete dssall to .long  (Alexey Kardashevskiy)
commit d51f86cfd8e378d4907958db77da3074f6dce3ba upstream.

The dssall ("Data Stream Stop All") instruction has been obsolete, together with the other Data Cache Instructions, since ISA 2.03 (2006). LLVM IAS does not support it, but PPC970 seems to be using it. This switches dssall to .long, as there is not much point in fixing LLVM.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211221055904.555763-6-aik@ozlabs.ru
[sudip: adjust context]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
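The substitution, sketched; 0x7e00066c is the dssall encoding (primary opcode 31, A bit set, extended opcode 822), so the assembler emits the identical machine word without having to recognise the mnemonic:

  /* was:  dssall                                    */
  /* now:  the same instruction emitted as raw data  */
  .long 0x7e00066c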
2022-04-20  Merge tag 'v5.4.189' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.189 stable release
2022-04-15  powerpc/kasan: Fix early region not updated correctly  (Chen Jingwen)
commit dd75080aa8409ce10d50fb58981c6b59bf8707d3 upstream.

The shadow's page table is not updated when PTE_RPN_SHIFT is 24 and PAGE_SHIFT is 12. It not only causes false positives but also false negatives, as shown in the following text. Fix it by bringing the logic of kasan_early_shadow_page_entry here.

1. False positive:

  ==================================================================
  BUG: KASAN: vmalloc-out-of-bounds in pcpu_alloc+0x508/0xa50
  Write of size 16 at addr f57f3be0 by task swapper/0/1

  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.15.0-12267-gdebe436e77c7 #1
  Call Trace:
  [c80d1c20] [c07fe7b8] dump_stack_lvl+0x4c/0x6c (unreliable)
  [c80d1c40] [c02ff668] print_address_description.constprop.0+0x88/0x300
  [c80d1c70] [c02ff45c] kasan_report+0x1ec/0x200
  [c80d1cb0] [c0300b20] kasan_check_range+0x160/0x2f0
  [c80d1cc0] [c03018a4] memset+0x34/0x90
  [c80d1ce0] [c0280108] pcpu_alloc+0x508/0xa50
  [c80d1d40] [c02fd7bc] __kmem_cache_create+0xfc/0x570
  [c80d1d70] [c0283d64] kmem_cache_create_usercopy+0x274/0x3e0
  [c80d1db0] [c2036580] init_sd+0xc4/0x1d0
  [c80d1de0] [c00044a0] do_one_initcall+0xc0/0x33c
  [c80d1eb0] [c2001624] kernel_init_freeable+0x2c8/0x384
  [c80d1ef0] [c0004b14] kernel_init+0x24/0x170
  [c80d1f10] [c001b26c] ret_from_kernel_thread+0x5c/0x64

  Memory state around the buggy address:
   f57f3a80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
   f57f3b00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
  >f57f3b80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
                                                 ^
   f57f3c00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
   f57f3c80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
  ==================================================================

2. False negative (with KASAN tests):

  ==================================================================
  Before fix:
  ok 45 - kmalloc_double_kzfree
  # vmalloc_oob: EXPECTATION FAILED at lib/test_kasan.c:1039
  KASAN failure expected in "((volatile char *)area)[3100]", but none occurred
  not ok 46 - vmalloc_oob
  not ok 1 - kasan
  ==================================================================
  After fix:
  ok 1 - kasan
  ==================================================================

Fixes: cbd18991e24fe ("powerpc/mm: Fix an Oops in kasan_mmu_init()")
Cc: stable@vger.kernel.org # 5.4.x
Signed-off-by: Chen Jingwen <chenjingwen6@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211229035226.59159-1-chenjingwen6@huawei.com
[chleroy: Backport for 5.4]
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-07-20  Merge tag 'v5.4.133' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.133 stable release
2021-07-19  powerpc/mm: Fix lockup on kernel exec fault  (Christophe Leroy)
commit cd5d5e602f502895e47e18cd46804d6d7014e65c upstream.

The powerpc kernel is not prepared to handle exec faults from kernel. Especially, the function is_exec_fault() will return 'false' when an exec fault is taken by kernel, because the check is based on reading current->thread.regs->trap which contains the trap from user.

For instance, when provoking a LKDTM EXEC_USERSPACE test, current->thread.regs->trap is set to SYSCALL trap (0xc00), and the fault taken by the kernel is not seen as an exec fault by set_access_flags_filter().

Commit d7df2443cd5f ("powerpc/mm: Fix spurious segfaults on radix with autonuma") made it clear and handled it properly. But later on commit d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults") removed that handling, introducing a test based on error_code. And here is the problem, because on the 603 all upper bits of SRR1 get cleared when the TLB instruction miss handler bails out to ISI.

Until commit cbd7e6ca0210 ("powerpc/fault: Avoid heavy search_exception_tables() verification"), an exec fault from kernel at a userspace address was indirectly caught by the lack of entry for that address in the exception tables. But after that commit the kernel mainly relies on KUAP or on core mm handling to catch wrong user accesses. Here the access is not wrong, so mm handles it. It is a minor fault because PAGE_EXEC is not set; set_access_flags_filter() should set PAGE_EXEC and voila. But as is_exec_fault() returns false as explained in the beginning, set_access_flags_filter() bails out without setting the PAGE_EXEC flag, which leads to a forever minor exec fault.

As the kernel is not prepared to handle such exec faults, the thing to do is to fire in bad_kernel_fault() for any exec fault taken by the kernel, as it was prior to commit d3ca587404b3.

Fixes: d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/024bb05105050f704743a0083fe3548702be5706.1625138205.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
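The shape of the fix, sketched inside bad_kernel_fault(); the surrounding signature and message text are simplified and should be treated as assumptions:

  /* in bad_kernel_fault() -- any exec fault taken by the kernel is fatal */
  if (is_exec) {
          pr_crit_ratelimited("kernel tried to execute user page (%lx)\n",
                              address);
          return true;
  }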
2021-05-17  Merge tag 'v5.4.119' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.119 stable release
2021-05-14  powerpc/64s: Fix pte update for kernel memory on radix  (Jordan Niethe)
[ Upstream commit b8b2f37cf632434456182e9002d63cbc4cccc50c ]

When adding a PTE a ptesync is needed to order the update of the PTE with subsequent accesses, otherwise a spurious fault may be raised.

radix__set_pte_at() does not do this for performance gains. For non-kernel memory this is not an issue as any faults of this kind are corrected by the page fault handler. For kernel memory these faults are not handled. The current solution is that there is a ptesync in flush_cache_vmap() which should be called when mapping from the vmalloc region. However, map_kernel_page() does not call flush_cache_vmap().

This is troublesome in particular for code patching with Strict RWX on radix. In do_patch_instruction() the page frame that contains the instruction to be patched is mapped and then immediately patched. With no ordering or synchronization between setting up the PTE and writing to the page, faults are possible. As the code patching is done using __put_user_asm_goto() the resulting fault is obscured - but using a normal store instead it can be seen:

  BUG: Unable to handle kernel data access on write at 0xc008000008f24a3c
  Faulting instruction address: 0xc00000000008bd74
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in: nop_module(PO+) [last unloaded: nop_module]
  CPU: 4 PID: 757 Comm: sh Tainted: P O 5.10.0-rc5-01361-ge3c1b78c8440-dirty #43
  NIP: c00000000008bd74 LR: c00000000008bd50 CTR: c000000000025810
  REGS: c000000016f634a0 TRAP: 0300 Tainted: P O (5.10.0-rc5-01361-ge3c1b78c8440-dirty)
  MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 44002884 XER: 00000000
  CFAR: c00000000007c68c DAR: c008000008f24a3c DSISR: 42000000 IRQMASK: 1

This results in the kind of issue reported here:
https://lore.kernel.org/linuxppc-dev/15AC5B0E-A221-4B8C-9039-FA96B8EF7C88@lca.pw/

Chris Riedl suggested a reliable way to reproduce the issue:

  $ mount -t debugfs none /sys/kernel/debug
  $ (while true; do echo function > /sys/kernel/debug/tracing/current_tracer ; echo nop > /sys/kernel/debug/tracing/current_tracer ; done) &

Turning ftrace on and off does a large amount of code patching which will usually crash in less than 5 min, giving a trace like:

  ftrace-powerpc: (____ptrval____): replaced (4b473b11) != old (60000000)
  ------------[ ftrace bug ]------------
  ftrace failed to modify
  [<c000000000bf8e5c>] napi_busy_loop+0xc/0x390
   actual: 11:3b:47:4b
  Setting ftrace call site to call ftrace function
  ftrace record flags: 80000001
   (1)
   expected tramp: c00000000006c96c
  ------------[ cut here ]------------
  WARNING: CPU: 4 PID: 809 at kernel/trace/ftrace.c:2065 ftrace_bug+0x28c/0x2e8
  Modules linked in: nop_module(PO-) [last unloaded: nop_module]
  CPU: 4 PID: 809 Comm: sh Tainted: P O 5.10.0-rc5-01360-gf878ccaf250a #1
  NIP: c00000000024f334 LR: c00000000024f330 CTR: c0000000001a5af0
  REGS: c000000004c8b760 TRAP: 0700 Tainted: P O (5.10.0-rc5-01360-gf878ccaf250a)
  MSR: 900000000282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 28008848 XER: 20040000
  CFAR: c0000000001a9c98 IRQMASK: 0
  GPR00: c00000000024f330 c000000004c8b9f0 c000000002770600 0000000000000022
  GPR04: 00000000ffff7fff c000000004c8b6d0 0000000000000027 c0000007fe9bcdd8
  GPR08: 0000000000000023 ffffffffffffffd8 0000000000000027 c000000002613118
  GPR12: 0000000000008000 c0000007fffdca00 0000000000000000 0000000000000000
  GPR16: 0000000023ec37c5 0000000000000000 0000000000000000 0000000000000008
  GPR20: c000000004c8bc90 c0000000027a2d20 c000000004c8bcd0 c000000002612fe8
  GPR24: 0000000000000038 0000000000000030 0000000000000028 0000000000000020
  GPR28: c000000000ff1b68 c000000000bf8e5c c00000000312f700 c000000000fbb9b0
  NIP ftrace_bug+0x28c/0x2e8
  LR ftrace_bug+0x288/0x2e8
  Call Trace:
    ftrace_bug+0x288/0x2e8 (unreliable)
    ftrace_modify_all_code+0x168/0x210
    arch_ftrace_update_code+0x18/0x30
    ftrace_run_update_code+0x44/0xc0
    ftrace_startup+0xf8/0x1c0
    register_ftrace_function+0x4c/0xc0
    function_trace_init+0x80/0xb0
    tracing_set_tracer+0x2a4/0x4f0
    tracing_set_trace_write+0xd4/0x130
    vfs_write+0xf0/0x330
    ksys_write+0x84/0x140
    system_call_exception+0x14c/0x230
    system_call_common+0xf0/0x27c

Fix this by using a ptesync when updating kernel memory PTEs.

Fixes: f1cb8f9beba8 ("powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags")
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Tidy up change log slightly]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208032957.1232102-1-jniethe5@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
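The fix, sketched: kernel-address updates in radix__set_pte_at() get an explicit ptesync so the PTE store is ordered before any subsequent access. The is_kernel_addr() predicate is the usual powerpc helper; the exact placement is an assumption:

  /* after the PTE store in radix__set_pte_at() -- sketch */
  if (is_kernel_addr(addr))
          asm volatile("ptesync" : : : "memory");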
2021-01-07  Merge tag 'v5.4.86' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.86 stable release
2020-12-30  powerpc/mm: Fix verification of MMU_FTR_TYPE_44x  (Christophe Leroy)
commit 17179aeb9d34cc81e1a4ae3f85e5b12b13a1f8d0 upstream.

MMU_FTR_TYPE_44x cannot be checked by cpu_has_feature(). Use mmu_has_feature() instead.

Fixes: 23eb7f560a2a ("powerpc: Convert flush_icache_range & friends to C")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ceede82fadf37f3b8275e61fcf8cf29a3e2ec7fe.1602351011.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
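The one-line shape of the change (an MMU feature bit has to be tested with the MMU helper, not the CPU one):

  - if (cpu_has_feature(MMU_FTR_TYPE_44x))
  + if (mmu_has_feature(MMU_FTR_TYPE_44x))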
2020-12-30  powerpc/mm: sanity_check_fault() should work for all, not only BOOK3S  (Christophe Leroy)
[ Upstream commit 7ceb40027e19567a0a066e3b380cc034cdd9a124 ]

The verification and message introduced by commit 374f3f5979f9 ("powerpc/mm/hash: Handle user access of kernel address gracefully") apply to all platforms; they should not be limited to BOOK3S.

Make the BOOK3S version of sanity_check_fault() the one for all, and bail out earlier if not BOOK3S.

Fixes: 374f3f5979f9 ("powerpc/mm/hash: Handle user access of kernel address gracefully")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fe199d5af3578d3bf80035d203a94d742a7a28af.1607491748.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
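A sketch of the resulting structure; the return-value semantics are simplified here, and the IS_ENABLED() early bail-out is the point of the change:

  static bool sanity_check_fault(bool is_write, bool is_user,
                                 unsigned long error_code, unsigned long address)
  {
          if (is_user && address >= TASK_SIZE) {
                  /* user access of kernel address: checked on every platform */
                  return false;
          }

          if (!IS_ENABLED(CONFIG_PPC_BOOK3S))
                  return true;    /* remaining checks are BOOK3S-only */

          /* BOOK3S-specific error-code checks follow */
          return true;
  }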
2020-11-02  Merge tag 'v5.4.73' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.73 stable release

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
2020-10-29  powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm  (Nicholas Piggin)
[ Upstream commit a665eec0a22e11cdde708c1c256a465ebe768047 ]

Commit 0cef77c7798a7 ("powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask") added a mechanism to trim the mm_cpumask of a process under certain conditions. One of the assumptions is that mm_users would not be incremented via a reference outside the process context with mmget_not_zero() then go on to kthread_use_mm() via that reference.

That invariant was broken by io_uring code (see previous sparc64 fix), but I'll point Fixes: to the original powerpc commit because we are changing that assumption going forward, so this will make backports match up.

Fix this by no longer relying on that assumption, but by having each CPU check the mm is not being used, and clearing their own bit from the mask only if it hasn't been switched-to by the time the IPI is processed.

This relies on commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB invalidate") and ARCH_WANT_IRQS_OFF_ACTIVATE_MM to disable irqs over mm switch sequences.

Fixes: 0cef77c7798a7 ("powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Ellerman <mpe@ellerman.id.au>
Depends-on: 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB invalidate")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200914045219.3736466-5-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-10-29  pseries/drmem: don't cache node id in drmem_lmb struct  (Scott Cheloha)
[ Upstream commit e5e179aa3a39c818db8fbc2dce8d2cd24adaf657 ]

At memory hot-remove time we can retrieve an LMB's nid from its corresponding memory_block. There is no need to store the nid in multiple locations.

Note that lmb_to_memblock() uses find_memory_block() to get the corresponding memory_block. As find_memory_block() runs in sub-linear time this approach is negligibly slower than what we do at present.

In exchange for this lookup at hot-remove time we no longer need to call memory_add_physaddr_to_nid() during drmem_init() for each LMB. On powerpc, memory_add_physaddr_to_nid() is a linear search, so this spares us an O(n^2) initialization during boot.

On systems with many LMBs that initialization overhead is palpable and disruptive. For example, on a box with 249854 LMBs we're seeing drmem_init() take upwards of 30 seconds to complete:

  [   53.721639] drmem: initializing drmem v2
  [   80.604346] watchdog: BUG: soft lockup - CPU#65 stuck for 23s! [swapper/0:1]
  [   80.604377] Modules linked in:
  [   80.604389] CPU: 65 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc2+ #4
  [   80.604397] NIP: c0000000000a4980 LR: c0000000000a4940 CTR: 0000000000000000
  [   80.604407] REGS: c0002dbff8493830 TRAP: 0901 Not tainted (5.6.0-rc2+)
  [   80.604412] MSR: 8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE> CR: 44000248 XER: 0000000d
  [   80.604431] CFAR: c0000000000a4a38 IRQMASK: 0
  [   80.604431] GPR00: c0000000000a4940 c0002dbff8493ac0 c000000001904400 c0003cfffffede30
  [   80.604431] GPR04: 0000000000000000 c000000000f4095a 000000000000002f 0000000010000000
  [   80.604431] GPR08: c0000bf7ecdb7fb8 c0000bf7ecc2d3c8 0000000000000008 c00c0002fdfb2001
  [   80.604431] GPR12: 0000000000000000 c00000001e8ec200
  [   80.604477] NIP [c0000000000a4980] hot_add_scn_to_nid+0xa0/0x3e0
  [   80.604486] LR [c0000000000a4940] hot_add_scn_to_nid+0x60/0x3e0
  [   80.604492] Call Trace:
  [   80.604498] [c0002dbff8493ac0] [c0000000000a4940] hot_add_scn_to_nid+0x60/0x3e0 (unreliable)
  [   80.604509] [c0002dbff8493b20] [c000000000087c10] memory_add_physaddr_to_nid+0x20/0x60
  [   80.604521] [c0002dbff8493b40] [c0000000010d4880] drmem_init+0x25c/0x2f0
  [   80.604530] [c0002dbff8493c10] [c000000000010154] do_one_initcall+0x64/0x2c0
  [   80.604540] [c0002dbff8493ce0] [c0000000010c4aa0] kernel_init_freeable+0x2d8/0x3a0
  [   80.604550] [c0002dbff8493db0] [c000000000010824] kernel_init+0x2c/0x148
  [   80.604560] [c0002dbff8493e20] [c00000000000b648] ret_from_kernel_thread+0x5c/0x74
  [   80.604567] Instruction dump:
  [   80.604574] 392918e8 e9490000 e90a000a e92a0000 80ea000c 1d080018 3908ffe8 7d094214
  [   80.604586] 7fa94040 419d00dc e9490010 714a0088 <2faa0008> 409e00ac e9490000 7fbe5040
  [   89.047390] drmem: 249854 LMB(s)

With a patched kernel on the same machine we're no longer seeing the soft lockup. drmem_init() now completes in negligible time, even when the LMB count is large.

Fixes: b2d3b5ee66f2 ("powerpc/pseries: Track LMB nid instead of using device tree")
Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com>
Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200811015115.63677-1-cheloha@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-10-05  Merge tag 'v5.4.69' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.69 stable release
2020-10-01  powerpc/book3s64: Fix error handling in mm_iommu_do_alloc()  (Alexey Kardashevskiy)
[ Upstream commit c4b78169e3667413184c9a20e11b5832288a109f ]

The last jump to free_exit in mm_iommu_do_alloc() happens after page pointers in struct mm_iommu_table_group_mem_t were already converted to physical addresses. Thus calling put_page() on these physical addresses will likely crash.

This moves the loop which calculates the pageshift and converts page struct pointers to physical addresses later, after the point when we cannot fail, thus eliminating the need to convert pointers back.

Fixes: eb9d7a62c386 ("powerpc/mm_iommu: Fix potential deadlock")
Reported-by: Jan Kara <jack@suse.cz>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191223060351.26359-1-aik@ozlabs.ru
Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-23  Merge tag 'v5.4.67' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.67 stable release

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
2020-09-23  powerpc/book3s64/radix: Fix boot failure with large amount of guest memory  (Aneesh Kumar K.V)
[ Upstream commit 103a8542cb35b5130f732d00b0419a594ba1b517 ]

If the hypervisor doesn't support hugepages, the kernel ends up allocating a large number of page table pages. The early page table allocation was wrongly setting the max memblock limit to ppc64_rma_size with radix translation, which resulted in boot failure as shown below:

  Kernel panic - not syncing: early_alloc_pgtable: Failed to allocate 16777216 bytes align=0x1000000 nid=-1 from=0x0000000000000000 max_addr=0xffffffffffffffff
  CPU: 0 PID: 0 Comm: swapper Not tainted 5.8.0-24.9-default+ #2
  Call Trace:
  [c0000000016f3d00] [c0000000007c6470] dump_stack+0xc4/0x114 (unreliable)
  [c0000000016f3d40] [c00000000014c78c] panic+0x164/0x418
  [c0000000016f3dd0] [c000000000098890] early_alloc_pgtable+0xe0/0xec
  [c0000000016f3e60] [c0000000010a5440] radix__early_init_mmu+0x360/0x4b4
  [c0000000016f3ef0] [c000000001099bac] early_init_mmu+0x1c/0x3c
  [c0000000016f3f10] [c00000000109a320] early_setup+0x134/0x170

This was because the kernel was checking for the radix feature before we enable the feature via mmu_features. This resulted in the kernel using hash restrictions on radix.

Rework the early init code such that the kernel boots with memblock restrictions as imposed by hash. At that point, the kernel still hasn't finalized the translation the kernel will end up using.

We have three different ways of detecting radix:

1. dt_cpu_ftrs_scan -> used only in case of PowerNV
2. ibm,pa-features -> used when we don't use cpu_dt_ftr_scan
3. CAS -> where we negotiate with the hypervisor about the supported translation

We look at 1 or 2 early in the boot and after that, we look at the CAS vector to finalize the translation the kernel will use. We also support a kernel command line option (disable_radix) to switch to hash.

Update the memblock limit after mmu_early_init_devtree() if the kernel is going to use radix translation. This forces some of the memblock allocations we do before mmu_early_init_devtree() to be within the RMA limit.

Fixes: 2bfd65e45e87 ("powerpc/mm/radix: Add radix callbacks for early init routines")
Reported-by: Shirisha Ganta <shiganta@in.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200828100852.426575-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-25  Merge tag 'v5.4.60' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.60 stable release
2020-08-21  powerpc: Allow 4224 bytes of stack expansion for the signal frame  (Michael Ellerman)
commit 63dee5df43a31f3844efabc58972f0a206ca4534 upstream.

We have powerpc specific logic in our page fault handling to decide if an access to an unmapped address below the stack pointer should expand the stack VMA.

The code was originally added in 2004 "ported from 2.4". The rough logic is that the stack is allowed to grow to 1MB with no extra checking. Over 1MB the access must be within 2048 bytes of the stack pointer, or be from a user instruction that updates the stack pointer.

The 2048 byte allowance below the stack pointer is there to cover the 288 byte "red zone" as well as the "about 1.5kB" needed by the signal delivery code.

Unfortunately since then the signal frame has expanded, and is now 4224 bytes on 64-bit kernels with transactional memory enabled. This means if a process has consumed more than 1MB of stack, and its stack pointer lies less than 4224 bytes from the next page boundary, signal delivery will fault when trying to expand the stack and the process will see a SEGV.

The total size of the signal frame is the size of struct rt_sigframe (which includes the red zone) plus __SIGNAL_FRAMESIZE (128 bytes on 64-bit).

The 2048 byte allowance was correct until 2008 as the signal frame was:

  struct rt_sigframe {
          struct ucontext    uc;               /*     0  1440 */
          /* --- cacheline 11 boundary (1408 bytes) was 32 bytes ago --- */
          long unsigned int  _unused[2];       /*  1440    16 */
          unsigned int       tramp[6];         /*  1456    24 */
          struct siginfo *   pinfo;            /*  1480     8 */
          void *             puc;              /*  1488     8 */
          struct siginfo     info;             /*  1496   128 */
          /* --- cacheline 12 boundary (1536 bytes) was 88 bytes ago --- */
          char               abigap[288];      /*  1624   288 */

          /* size: 1920, cachelines: 15, members: 7 */
          /* padding: 8 */
  };

  1920 + 128 = 2048

Then in commit ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support") (Jul 2008) the signal frame expanded to 2304 bytes:

  struct rt_sigframe {
          struct ucontext    uc;               /*     0  1696 */ <--
          /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
          long unsigned int  _unused[2];       /*  1696    16 */
          unsigned int       tramp[6];         /*  1712    24 */
          struct siginfo *   pinfo;            /*  1736     8 */
          void *             puc;              /*  1744     8 */
          struct siginfo     info;             /*  1752   128 */
          /* --- cacheline 14 boundary (1792 bytes) was 88 bytes ago --- */
          char               abigap[288];      /*  1880   288 */

          /* size: 2176, cachelines: 17, members: 7 */
          /* padding: 8 */
  };

  2176 + 128 = 2304

At this point we should have been exposed to the bug, though as far as I know it was never reported. I no longer have a system old enough to easily test on.

Then in 2010 commit 320b2b8de126 ("mm: keep a guard page below a grow-down stack segment") caused our stack expansion code to never trigger, as there was always a VMA found for a write up to PAGE_SIZE below r1.

That meant the bug was hidden as we continued to expand the signal frame in commit 2b0a576d15e0 ("powerpc: Add new transactional memory state to the signal context") (Feb 2013):

  struct rt_sigframe {
          struct ucontext    uc;               /*     0  1696 */
          /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
          struct ucontext    uc_transact;      /*  1696  1696 */ <--
          /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
          long unsigned int  _unused[2];       /*  3392    16 */
          unsigned int       tramp[6];         /*  3408    24 */
          struct siginfo *   pinfo;            /*  3432     8 */
          void *             puc;              /*  3440     8 */
          struct siginfo     info;             /*  3448   128 */
          /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
          char               abigap[288];      /*  3576   288 */

          /* size: 3872, cachelines: 31, members: 8 */
          /* padding: 8 */
          /* last cacheline: 32 bytes */
  };

  3872 + 128 = 4000

And commit 573ebfa6601f ("powerpc: Increase stack redzone for 64-bit userspace to 512 bytes") (Feb 2014):

  struct rt_sigframe {
          struct ucontext    uc;               /*     0  1696 */
          /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
          struct ucontext    uc_transact;      /*  1696  1696 */
          /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
          long unsigned int  _unused[2];       /*  3392    16 */
          unsigned int       tramp[6];         /*  3408    24 */
          struct siginfo *   pinfo;            /*  3432     8 */
          void *             puc;              /*  3440     8 */
          struct siginfo     info;             /*  3448   128 */
          /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
          char               abigap[512];      /*  3576   512 */ <--

          /* size: 4096, cachelines: 32, members: 8 */
          /* padding: 8 */
  };

  4096 + 128 = 4224

Then finally in 2017, commit 1be7107fbe18 ("mm: larger stack guard gap, between vmas") exposed us to the existing bug, because it changed the stack VMA to be the correct/real size, meaning our stack expansion code is now triggered.

Fix it by increasing the allowance to 4224 bytes.

Hard-coding 4224 is obviously unsafe against future expansions of the signal frame in the same way as the existing code. We can't easily use sizeof() because the signal frame structure is not in a header. We will either fix that, or rip out all the custom stack expansion checking logic entirely.

Fixes: ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support")
Cc: stable@vger.kernel.org # v2.6.27+
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724092528.1578671-2-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
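The allowance itself is a single constant in the fault path's stack-expansion check; sketched below with variable names assumed, not taken from the patch:

  /* over 1MB of stack: the access must be near r1, or come from an
   * instruction that updates the stack pointer;
   * 4224 = 4096-byte rt_sigframe + 128-byte __SIGNAL_FRAMESIZE */
  bool ok = (address + 4224 >= uregs->gpr[1]) || store_updates_sp;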
2020-08-21  powerpc/ptdump: Fix build failure in hashpagetable.c  (Christophe Leroy)
commit 7c466b0807960edc13e4b855be85ea765df9a6cd upstream.

H_SUCCESS is only defined when CONFIG_PPC_PSERIES is defined, and "!= H_SUCCESS" means "!= 0". Modify the test accordingly.

Fixes: 65e701b2d2a8 ("powerpc/ptdump: drop non vital #ifdefs")
Cc: stable@vger.kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/795158fc1d2b3dff3bf7347881947a887ea9391a.1592227105.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
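The shape of the change (the result-variable name is an assumption):

  - if (lpar_rc != H_SUCCESS)   /* H_SUCCESS is undefined when CONFIG_PPC_PSERIES=n */
  + if (lpar_rc != 0)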
2020-08-19  Merge tag 'v5.4.59' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.59 stable release
2020-08-19  powerpc/book3s64/pkeys: Use PVR check instead of cpu feature  (Aneesh Kumar K.V)
[ Upstream commit d79e7a5f26f1d179cbb915a8bf2469b6d7431c29 ]

We are wrongly using CPU_FTRS_POWER8 to check for P8 support. Instead, we should use the PVR value. Now considering we are using CPU_FTRS_POWER8, that implies we returned true for P9 with older firmware. Keep the same behavior by checking for the P9 PVR value.

Fixes: cf43d3b26452 ("powerpc: Enable pkey subsystem")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200709032946.881753-2-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
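A sketch of a PVR-based test; the macro names are the usual ones from asm/reg.h, and the exact condition in the fix may differ:

  unsigned int pvr = mfspr(SPRN_PVR);
  if (PVR_VER(pvr) == PVR_POWER8 || PVR_VER(pvr) == PVR_POWER8E ||
      PVR_VER(pvr) == PVR_POWER8NVL) {
          /* P8-specific pkey setup */
  }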
2020-08-13  Merge tag 'v5.4.58' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.58 stable release
2020-08-11  Revert "powerpc/kasan: Fix shadow pages allocation failure"  (Christophe Leroy)
commit b506923ee44ae87fc9f4de16b53feb313623e146 upstream.

This reverts commit d2a91cef9bbdeb87b7449fdab1a6be6000930210.

That commit moved too much work into kasan_init(). The allocation of shadow pages has to be moved for the reason explained in that patch, but the allocation of page tables still needs to be done before switching to the final hash table.

First revert the incorrect commit; the following patch redoes it properly.

Fixes: d2a91cef9bbd ("powerpc/kasan: Fix shadow pages allocation failure")
Cc: stable@vger.kernel.org
Reported-by: Erhard F. <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=208181
Link: https://lore.kernel.org/r/3667deb0911affbf999b99f87c31c77d5e870cd2.1593690707.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-28  Merge tag 'v5.4.53' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.53 stable release
2020-07-22  powerpc/book3s64/pkeys: Fix pkey_access_permitted() for execute disable pkey  (Aneesh Kumar K.V)
commit 192b6a780598976feb7321ff007754f8511a4129 upstream.

Even if the IAMR value denies execute access, the current code returns true from pkey_access_permitted() for an execute permission check, if the AMR read pkey bit is cleared.

This results in repeated page fault loop with a test like below:

  #define _GNU_SOURCE
  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <signal.h>
  #include <inttypes.h>
  #include <assert.h>
  #include <malloc.h>
  #include <unistd.h>
  #include <pthread.h>
  #include <sys/mman.h>

  #ifdef SYS_pkey_mprotect
  #undef SYS_pkey_mprotect
  #endif
  #ifdef SYS_pkey_alloc
  #undef SYS_pkey_alloc
  #endif
  #ifdef SYS_pkey_free
  #undef SYS_pkey_free
  #endif
  #undef PKEY_DISABLE_EXECUTE
  #define PKEY_DISABLE_EXECUTE 0x4

  #define SYS_pkey_mprotect 386
  #define SYS_pkey_alloc 384
  #define SYS_pkey_free 385

  #define PPC_INST_NOP 0x60000000
  #define PPC_INST_BLR 0x4e800020
  #define PROT_RWX (PROT_READ | PROT_WRITE | PROT_EXEC)

  static int sys_pkey_mprotect(void *addr, size_t len, int prot, int pkey)
  {
          return syscall(SYS_pkey_mprotect, addr, len, prot, pkey);
  }

  static int sys_pkey_alloc(unsigned long flags, unsigned long access_rights)
  {
          return syscall(SYS_pkey_alloc, flags, access_rights);
  }

  static int sys_pkey_free(int pkey)
  {
          return syscall(SYS_pkey_free, pkey);
  }

  static void do_execute(void *region)
  {
          /* jump to region */
          asm volatile(
                  "mtctr  %0;"
                  "bctrl"
                  : : "r"(region) : "ctr", "lr");
  }

  static void do_protect(void *region)
  {
          size_t pgsize;
          int i, pkey;

          pgsize = getpagesize();
          pkey = sys_pkey_alloc(0, PKEY_DISABLE_EXECUTE);
          assert(pkey > 0);

          /* perform mprotect */
          assert(!sys_pkey_mprotect(region, pgsize, PROT_RWX, pkey));
          do_execute(region);

          /* free pkey */
          assert(!sys_pkey_free(pkey));
  }

  int main(int argc, char **argv)
  {
          size_t pgsize, numinsns;
          unsigned int *region;
          int i;

          /* allocate memory region to protect */
          pgsize = getpagesize();
          region = memalign(pgsize, pgsize);
          assert(region != NULL);
          assert(!mprotect(region, pgsize, PROT_RWX));

          /* fill page with NOPs with a BLR at the end */
          numinsns = pgsize / sizeof(region[0]);
          for (i = 0; i < numinsns - 1; i++)
                  region[i] = PPC_INST_NOP;
          region[i] = PPC_INST_BLR;

          do_protect(region);
          return EXIT_SUCCESS;
  }

The fix is to only check the IAMR for an execute check; the AMR value is not relevant.

Fixes: f2407ef3ba22 ("powerpc: helper to validate key-access permissions of a pte")
Cc: stable@vger.kernel.org # v4.16+
Reported-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Add detail to change log, tweak wording & formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200712132047.1038594-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-26  Merge tag 'v5.4.49' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.49 stable release
2020-06-26  Merge tag 'v5.4.48' into v5.4/standard/base  (Bruce Ashfield)
This is the 5.4.48 stable release
2020-06-24  powerpc/32s: Don't warn when mapping RO data ROX.  (Christophe Leroy)
[ Upstream commit 4b19f96a81bceaf0bcf44d79c0855c61158065ec ]

Mapping RO data as ROX is not an issue since that data cannot be modified to introduce an exploit.

PPC64 accepts to have RO data mapped ROX, as a trade-off between kernel size and strictness of protection. On PPC32, kernel size is even more critical as the amount of memory is usually small.

Depending on the number of available IBATs, the last IBATs might overflow the end of text. Only warn if they cross the end of RO data.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6499f8eeb2a36330e5c9fc1cee9a79374875bd54.1589866984.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-24  powerpc/ptdump: Add _PAGE_COHERENT flag  (Christophe Leroy)
[ Upstream commit 3af4786eb429b2df76cbd7ce3bae21467ac3e4fb ]

For platforms using shared.c (4xx, Book3e, Book3s/32), also handle the _PAGE_COHERENT flag, which corresponds to the M bit of the WIMG flags.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Make it more verbose, use "coherent" rather than "m"]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/324c3d860717e8e91fca3bb6c0f8b23e1644a404.1589866984.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
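In ptdump's flag table this amounts to one new entry; a sketch using the flag-array field layout (string padding illustrative, label per the mpe note above):

  {
          .mask  = _PAGE_COHERENT,
          .val   = _PAGE_COHERENT,
          .set   = "coherent",
          .clear = "        ",
  },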
2020-06-22  powerpc/kasan: Fix shadow pages allocation failure  (Christophe Leroy)
commit d2a91cef9bbdeb87b7449fdab1a6be6000930210 upstream.

Doing kasan pages allocation in MMU_init is too early; the kernel doesn't have access yet to the entire memory space, and memblock_alloc() fails when the kernel is a bit big.

Do it from kasan_init() instead.

Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c24163ee5d5f8cdf52fefa45055ceb35435b8f15.1589866984.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-22  powerpc/mm: Fix conditions to perform MMU specific management by blocks on PPC32.  (Christophe Leroy)
commit 4e3319c23a66dabfd6c35f4d2633d64d99b68096 upstream.

Setting init mem to NX shall depend on sinittext being mapped by block, not on stext being mapped by block.

Setting text and rodata to RO shall depend on stext being mapped by block, not on sinittext being mapped by block.

Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7d565fb8f51b18a3d98445a830b2f6548cb2da2a.1589866984.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
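A sketch of the corrected conditions; the helper and symbol names are the ones commonly used in this code, so treat them as assumptions:

  /* init mem NX depends on _sinittext being block-mapped */
  if (v_block_mapped((unsigned long)_sinittext))
          mmu_mark_initmem_nx();

  /* text/rodata RO depends on _stext being block-mapped */
  if (v_block_mapped((unsigned long)_stext + 1))
          mmu_mark_rodata_ro();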