Age | Commit message | Author
2020-04-23 | f2fs: fix NULL pointer dereference in f2fs_verity_work() | Chao Yu
[ Upstream commit 79bbefb19f1359fb2cbd144d5a054649e7e583be ] If both the compression and fsverity features are on, generic/572 will report the NULL pointer dereference bug below.

 BUG: kernel NULL pointer dereference, address: 0000000000000018
 RIP: 0010:f2fs_verity_work+0x60/0x90 [f2fs]
 #PF: supervisor read access in kernel mode
 Workqueue: fsverity_read_queue f2fs_verity_work [f2fs]
 RIP: 0010:f2fs_verity_work+0x60/0x90 [f2fs]
 Call Trace:
  process_one_work+0x16c/0x3f0
  worker_thread+0x4c/0x440
  ? rescuer_thread+0x350/0x350
  kthread+0xf8/0x130
  ? kthread_unpark+0x70/0x70
  ret_from_fork+0x35/0x40

There are two issues in f2fs_verity_work():
 - it needs to traverse and verify all pages in the bio;
 - if pages in the bio belong to a non-compressed cluster, accessing the decompress IO context stored in page private will cause a NULL pointer dereference.

Fix them. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | f2fs: fix potential .flags overflow on 32bit architecture | Chao Yu
[ Upstream commit 7653b9d87516ed65e112d2273c65eca6f97d0a27 ] f2fs_inode_info.flags is an unsigned long variable, which has only 32 bits on 32-bit architectures. Since the FI_MMAP_FILE flag was introduced when data compression support was added, we may access memory across the boundary of the .flags field, corrupting the .i_sem field and resulting in the deadlock below. To fix this issue, expand .flags into an array to grab enough space to store new flags.

 Call Trace:
  __schedule+0x8d0/0x13fc
  ? mark_held_locks+0xac/0x100
  schedule+0xcc/0x260
  rwsem_down_write_slowpath+0x3ab/0x65d
  down_write+0xc7/0xe0
  f2fs_drop_nlink+0x3d/0x600 [f2fs]
  f2fs_delete_inline_entry+0x300/0x440 [f2fs]
  f2fs_delete_entry+0x3a1/0x7f0 [f2fs]
  f2fs_unlink+0x500/0x790 [f2fs]
  vfs_unlink+0x211/0x490
  do_unlinkat+0x483/0x520
  sys_unlink+0x4a/0x70
  do_fast_syscall_32+0x12b/0x683
  entry_SYSENTER_32+0xaa/0x102

Fixes: 4c8ff7095bef ("f2fs: support data compression") Tested-by: Ondrej Jirman <megous@megous.com> Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
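For illustration, a minimal sketch of the kind of layout change described above (the struct and macro names here are assumptions, not the exact upstream diff):

	/* Sketch: store inode flags in a bitmap array so a flag number >= 32
	 * can no longer spill into the neighbouring i_sem field on 32-bit. */
	struct f2fs_inode_info_sketch {
		unsigned long flags[BITS_TO_LONGS(FI_MAX)];	/* was: unsigned long flags; */
		struct rw_semaphore i_sem;
	};

	static inline void set_inode_flag_sketch(struct f2fs_inode_info_sketch *fi, int flag)
	{
		set_bit(flag, fi->flags);	/* was: set_bit(flag, &fi->flags) */
	}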
2020-04-23 | f2fs: compress: fix to call missing destroy_compress_ctx() | Chao Yu
[ Upstream commit 09ff48011e220e2b4f1d9ce2f472ecb63645cbfc ] Otherwise, it will cause a memory leak. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | csky: Fixup get wrong psr value from physical reg | Guo Ren
[ Upstream commit 9c0e343d7654a329d1f9b53d253cbf7fb6eff85d ] We should get the psr value from regs->psr on the stack, not read it directly from the physical register, and then save the vector number in tsk->trap_no. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | ACPI: Update Tiger Lake ACPI device IDs | Gayatri Kammela
[ Upstream commit b62c770fee699a137359e1f1da9bf14a7f348567 ] Tiger Lake's new unique ACPI device IDs for the DPTF and fan drivers are not valid as the IDs are missing 'C'. Fix the IDs by updating them. After the update, the new IDs should now look like:

 INT1047 --> INTC1047
 INT1040 --> INTC1040
 INT1043 --> INTC1043
 INT1044 --> INTC1044

Fixes: 55cfe6a5c582 ("ACPI: DPTF: Add Tiger Lake ACPI device IDs") Fixes: c248dfe7e0ca ("ACPI: fan: Add Tiger Lake ACPI device ID") Suggested-by: Srinivas Pandruvada <srinivas.pandruvada@intel.com> Signed-off-by: Gayatri Kammela <gayatri.kammela@intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | NFS: Fix memory leaks in nfs_pageio_stop_mirroring() | Trond Myklebust
[ Upstream commit 862f35c94730c9270833f3ad05bd758a29f204ed ] If we just set the mirror count to 1 without first clearing out the mirrors, we can leak queued up requests. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | drm/amdkfd: kfree the wrong pointer | Jack Zhang
[ Upstream commit 3148a6a0ef3cf93570f30a477292768f7eb5d3c3 ] Originally, it kfrees the wrong pointer for mem_obj, which would cause a memory leak under stress testing. Signed-off-by: Jack Zhang <Jack.Zhang1@amd.com> Acked-by: Nirmoy Das <nirmoy.das@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | csky: Fixup cpu speculative execution to IO area | Guo Ren
[ Upstream commit aefd9461d34a1b0a2acad0750c43216c1c27b9d4 ] For a memory size > 512MB and < 1GB, the MSA setting is:

 - SSEG0: PHY_START        , PHY_START + 512MB
 - SSEG1: PHY_START + 512MB, PHY_START + 1GB

But the real memory is no more than 1GB; there is a gap between the end of memory and the 1GB border. The CPU could speculatively execute into that gap, and if the bus can't respond to the CPU's request for that gap, a crash will happen. Now make the setting:

 - SSEG0: PHY_START, PHY_START + 512MB (no change)
 - SSEG1: Disabled (we use highmem to use the memory from 512MB to 1GB)

We also deprecate the zhole_size[] settings; they are only used by ARM-style CPUs. All memory gaps should use a Reserved setting in the dts on csky systems. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | x86: ACPI: fix CPU hotplug deadlock | Qian Cai
[ Upstream commit 696ac2e3bf267f5a2b2ed7d34e64131f2287d0ad ] Similar to commit 0266d81e9bf5 ("acpi/processor: Prevent cpu hotplug deadlock") except this is for acpi_processor_ffh_cstate_probe(): "The problem is that the work is scheduled on the current CPU from the hotplug thread associated with that CPU. It's not required to invoke these functions via the workqueue because the hotplug thread runs on the target CPU already. Check whether current is a per cpu thread pinned on the target CPU and invoke the function directly to avoid the workqueue."

 WARNING: possible circular locking dependency detected
 ------------------------------------------------------
 cpuhp/1/15 is trying to acquire lock:
 ffffc90003447a28 ((work_completion)(&wfc.work)){+.+.}-{0:0}, at: __flush_work+0x4c6/0x630

 but task is already holding lock:
 ffffffffafa1c0e8 (cpuidle_lock){+.+.}-{3:3}, at: cpuidle_pause_and_lock+0x17/0x20

 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 (cpu_hotplug_lock){++++}-{0:0}:
    cpus_read_lock+0x3e/0xc0
    irq_calc_affinity_vectors+0x5f/0x91
    __pci_enable_msix_range+0x10f/0x9a0
    pci_alloc_irq_vectors_affinity+0x13e/0x1f0
    pci_alloc_irq_vectors_affinity at drivers/pci/msi.c:1208
    pqi_ctrl_init+0x72f/0x1618 [smartpqi]
    pqi_pci_probe.cold.63+0x882/0x892 [smartpqi]
    local_pci_probe+0x7a/0xc0
    work_for_cpu_fn+0x2e/0x50
    process_one_work+0x57e/0xb90
    worker_thread+0x363/0x5b0
    kthread+0x1f4/0x220
    ret_from_fork+0x27/0x50

 -> #0 ((work_completion)(&wfc.work)){+.+.}-{0:0}:
    __lock_acquire+0x2244/0x32a0
    lock_acquire+0x1a2/0x680
    __flush_work+0x4e6/0x630
    work_on_cpu+0x114/0x160
    acpi_processor_ffh_cstate_probe+0x129/0x250
    acpi_processor_evaluate_cst+0x4c8/0x580
    acpi_processor_get_power_info+0x86/0x740
    acpi_processor_hotplug+0xc3/0x140
    acpi_soft_cpu_online+0x102/0x1d0
    cpuhp_invoke_callback+0x197/0x1120
    cpuhp_thread_fun+0x252/0x2f0
    smpboot_thread_fn+0x255/0x440
    kthread+0x1f4/0x220
    ret_from_fork+0x27/0x50

 other info that might help us debug this:

 Chain exists of:
   (work_completion)(&wfc.work) --> cpuhp_state-up --> cpuidle_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(cpuidle_lock);
                               lock(cpuhp_state-up);
                               lock(cpuidle_lock);
  lock((work_completion)(&wfc.work));

 *** DEADLOCK ***

 3 locks held by cpuhp/1/15:
  #0: ffffffffaf51ab10 (cpu_hotplug_lock){++++}-{0:0}, at: cpuhp_thread_fun+0x69/0x2f0
  #1: ffffffffaf51ad40 (cpuhp_state-up){+.+.}-{0:0}, at: cpuhp_thread_fun+0x69/0x2f0
  #2: ffffffffafa1c0e8 (cpuidle_lock){+.+.}-{3:3}, at: cpuidle_pause_and_lock+0x17/0x20

 Call Trace:
  dump_stack+0xa0/0xea
  print_circular_bug.cold.52+0x147/0x14c
  check_noncircular+0x295/0x2d0
  __lock_acquire+0x2244/0x32a0
  lock_acquire+0x1a2/0x680
  __flush_work+0x4e6/0x630
  work_on_cpu+0x114/0x160
  acpi_processor_ffh_cstate_probe+0x129/0x250
  acpi_processor_evaluate_cst+0x4c8/0x580
  acpi_processor_get_power_info+0x86/0x740
  acpi_processor_hotplug+0xc3/0x140
  acpi_soft_cpu_online+0x102/0x1d0
  cpuhp_invoke_callback+0x197/0x1120
  cpuhp_thread_fun+0x252/0x2f0
  smpboot_thread_fn+0x255/0x440
  kthread+0x1f4/0x220
  ret_from_fork+0x27/0x50

Signed-off-by: Qian Cai <cai@lca.pw> Tested-by: Borislav Petkov <bp@suse.de> [ rjw: Subject ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
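For illustration, a minimal sketch of the "invoke directly when already pinned on the target CPU" pattern the quoted text describes (the helper name is hypothetical, not the exact upstream diff):

	#include <linux/sched.h>	/* is_percpu_thread() */
	#include <linux/smp.h>		/* smp_processor_id() */
	#include <linux/workqueue.h>	/* work_on_cpu() */

	/* Sketch: run fn(arg) on 'cpu' without bouncing through the workqueue
	 * when the caller (e.g. the hotplug thread) is already pinned there. */
	static long run_on_cpu_sketch(int cpu, long (*fn)(void *), void *arg)
	{
		if (is_percpu_thread() && cpu == smp_processor_id())
			return fn(arg);	/* direct call: no __flush_work, no deadlock */

		return work_on_cpu(cpu, fn, arg);
	}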
2020-04-23 | leds: core: Fix warning message when init_data | Ricardo Ribalda Delgado
[ Upstream commit 64ed6588c2ea618d3f9ca9d8b365ae4c19f76225 ] The warning message printed when an LED is renamed due to a name collision can fail to show the proper original name if init_data is used. E.g.:

 [ 9.073996] leds-gpio a0040000.leds_0: Led (null) renamed to red_led_1 due to name collision

Fixes: bb4e9af0348d ("leds: core: Add support for composing LED class device names") Signed-off-by: Ricardo Ribalda Delgado <ribalda@kernel.org> Acked-by: Jacek Anaszewski <jacek.anaszewski@gmail.com> Signed-off-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | drm/nouveau: workaround runpm fail by disabling PCI power management on certain intel bridges | Karol Herbst
[ Upstream commit 434fdb51513bf3057ac144d152e6f2f2b509e857 ] Fixes the infamous 'runtime PM' bug many users are facing on laptops with Nvidia Pascal GPUs, by skipping said PCI power state changes on the GPU. Depending on the kernel used there might be messages like these in dmesg:

 "nouveau 0000:01:00.0: Refused to change power state, currently in D3"
 "nouveau 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)"

followed by backtraces of kernel crashes or timeouts within nouveau. It's still unknown why this issue exists, but this is a reliable workaround and solves a very annoying issue for users, who otherwise have to choose between a crashing kernel and the higher power consumption of their laptops. Signed-off-by: Karol Herbst <kherbst@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Lyude Paul <lyude@redhat.com> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Mika Westerberg <mika.westerberg@intel.com> Cc: linux-pci@vger.kernel.org Cc: linux-pm@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: nouveau@lists.freedesktop.org Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=205623 Signed-off-by: Ben Skeggs <bskeggs@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | KVM: s390: vsie: Fix possible race when shadowing region 3 tables | David Hildenbrand
[ Upstream commit 1493e0f944f3c319d11e067c185c904d01c17ae5 ] We have to properly retry by returning -EINVAL immediately in case somebody else instantiated the table concurrently. We only missed adding the goto in this function; the code now matches the other, similar shadowing functions. We are overwriting an existing region 2 table entry. All allocated pages are added to the crst_list to be freed later, so they are not lost forever. However, when unshadowing the region 2 table, we wouldn't trigger unshadowing of the original shadowed region 3 table that we replaced. It would get unshadowed when the original region 3 table is modified. As it's not connected to the page table hierarchy anymore, it's not going to get used anymore. However, for a limited time, this page table will stick around, so it's in some sense a temporary memory leak. Identified by manual code inspection. I don't think this classifies as stable material. Fixes: 998f637cc4b9 ("s390/mm: avoid races on region/segment/page table shadowing") Signed-off-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200403153050.20569-4-david@redhat.com Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | compiler.h: fix error in BUILD_BUG_ON() reporting | Vegard Nossum
[ Upstream commit af9c5d2e3b355854ff0e4acfbfbfadcd5198a349 ] compiletime_assert() uses __LINE__ to create a unique function name. This means that if you have more than one BUILD_BUG_ON() in the same source line (which can happen if they appear e.g. in a macro), then the error message from the compiler might output the wrong condition. For this source file:

	#include <linux/build_bug.h>

	#define macro() \
		BUILD_BUG_ON(1); \
		BUILD_BUG_ON(0);

	void foo()
	{
		macro();
	}

gcc would output:

 ./include/linux/compiler.h:350:38: error: call to `__compiletime_assert_9' declared with attribute error: BUILD_BUG_ON failed: 0
   _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)

However, it was not the BUILD_BUG_ON(0) that failed, so it should say 1 instead of 0. With this patch, we use __COUNTER__ instead of __LINE__, so each BUILD_BUG_ON() gets a different function name and the correct condition is printed:

 ./include/linux/compiler.h:350:38: error: call to `__compiletime_assert_0' declared with attribute error: BUILD_BUG_ON failed: 1
   _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)

Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Daniel Santos <daniel.santos@pobox.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Ian Abbott <abbotti@mev.co.uk> Cc: Joe Perches <joe@perches.com> Link: http://lkml.kernel.org/r/20200331112637.25047-1-vegard.nossum@oracle.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | percpu_counter: fix a data race at vm_committed_as | Qian Cai
[ Upstream commit 7e2345200262e4a6056580f0231cccdaffc825f3 ] "vm_committed_as.count" could be accessed concurrently, as reported by KCSAN:

 BUG: KCSAN: data-race in __vm_enough_memory / percpu_counter_add_batch

 write to 0xffffffff9451c538 of 8 bytes by task 65879 on cpu 35:
  percpu_counter_add_batch+0x83/0xd0
  percpu_counter_add_batch at lib/percpu_counter.c:91
  __vm_enough_memory+0xb9/0x260
  dup_mm+0x3a4/0x8f0
  copy_process+0x2458/0x3240
  _do_fork+0xaa/0x9f0
  __do_sys_clone+0x125/0x160
  __x64_sys_clone+0x70/0x90
  do_syscall_64+0x91/0xb05
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

 read to 0xffffffff9451c538 of 8 bytes by task 66773 on cpu 19:
  __vm_enough_memory+0x199/0x260
  percpu_counter_read_positive at include/linux/percpu_counter.h:81
  (inlined by) __vm_enough_memory at mm/util.c:839
  mmap_region+0x1b2/0xa10
  do_mmap+0x45c/0x700
  vm_mmap_pgoff+0xc0/0x130
  ksys_mmap_pgoff+0x6e/0x300
  __x64_sys_mmap+0x33/0x40
  do_syscall_64+0x91/0xb05
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

The read is outside the percpu_counter::lock critical section, which results in a data race. Fix it by adding a READ_ONCE() in percpu_counter_read_positive(), which can also serve as the existing compiler memory barrier. Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Marco Elver <elver@google.com> Link: http://lkml.kernel.org/r/1582302724-2804-1-git-send-email-cai@lca.pw Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
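A minimal sketch of the kind of annotation described (assuming the stock percpu_counter layout; not necessarily the verbatim patch):

	#include <linux/compiler.h>		/* READ_ONCE() */
	#include <linux/percpu_counter.h>

	static inline s64 percpu_counter_read_positive_sketch(struct percpu_counter *fbc)
	{
		/* Lockless read: annotate it so the compiler cannot reload it and
		 * KCSAN knows the race with percpu_counter_add_batch() is intended. */
		s64 ret = READ_ONCE(fbc->count);

		return ret >= 0 ? ret : 0;
	}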
2020-04-23 | include/linux/swapops.h: correct guards for non_swap_entry() | Steven Price
[ Upstream commit 3f3673d7d324d872d9d8ddb73b3e5e47fbf12e0d ] If CONFIG_DEVICE_PRIVATE is defined, but neither CONFIG_MEMORY_FAILURE nor CONFIG_MIGRATION, then non_swap_entry() will return 0, meaning that the condition (non_swap_entry(entry) && is_device_private_entry(entry)) in zap_pte_range() will never be true even if the entry is a device private one. Equally any other code depending on non_swap_entry() will not function as expected. I originally spotted this just by looking at the code, I haven't actually observed any problems. Looking a bit more closely it appears that actually this situation (currently at least) cannot occur:

 DEVICE_PRIVATE depends on ZONE_DEVICE
 ZONE_DEVICE depends on MEMORY_HOTREMOVE
 MEMORY_HOTREMOVE depends on MIGRATION

Fixes: 5042db43cc26 ("mm/ZONE_DEVICE: new type of ZONE_DEVICE for unaddressable memory") Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Jérôme Glisse <jglisse@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: John Hubbard <jhubbard@nvidia.com> Link: http://lkml.kernel.org/r/20200305130550.22693-1-steven.price@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
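A sketch of the guard change being described, assuming the usual shape of the swapops.h helpers:

	/* Sketch: build the "real" non_swap_entry() whenever any user of
	 * special swap entries is enabled, including CONFIG_DEVICE_PRIVATE. */
	#if defined(CONFIG_MEMORY_FAILURE) || defined(CONFIG_MIGRATION) || \
	    defined(CONFIG_DEVICE_PRIVATE)
	static inline int non_swap_entry(swp_entry_t entry)
	{
		return swp_type(entry) >= MAX_SWAPFILES;
	}
	#else
	static inline int non_swap_entry(swp_entry_t entry)
	{
		return 0;
	}
	#endif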
2020-04-23 | drm/nouveau/svm: fix vma range check for migration | Ralph Campbell
[ Upstream commit b92103b559c77abc5f8b7bec269230a219c880b7 ] find_vma_intersection(mm, start, end) only guarantees that end is greater than or equal to vma->vm_start but doesn't guarantee that start is greater than or equal to vma->vm_start. The calculation for the intersecting range in nouveau_svmm_bind() isn't accounting for this and can call migrate_vma_setup() with a starting address less than vma->vm_start. This results in migrate_vma_setup() returning -EINVAL for the range instead of nouveau skipping that part of the range and migrating the rest. Signed-off-by: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
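For illustration, a sketch of the clamping described above (the surrounding variable names are assumptions, not the exact upstream code):

	/* Sketch: find_vma_intersection() may return a VMA whose vm_start is
	 * above 'addr', so clamp the migration window to the VMA actually found. */
	vma = find_vma_intersection(mm, addr, end);
	if (!vma)
		return -EINVAL;

	addr = max(addr, vma->vm_start);
	next = min(vma->vm_end, end);
	/* migrate the range [addr, next) instead of failing with -EINVAL */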
2020-04-23 | drm/nouveau/svm: check for SVM initialized before migrating | Ralph Campbell
[ Upstream commit 822cab6150d3002952407a8297ff5a0d32bb7b54 ] When migrating system memory to GPU memory, check that SVM has been enabled. Even though most errors can be ignored since migration is a performance optimization, return an error because this is a violation of the API. Signed-off-by: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | macsec: fix NULL dereference in macsec_upd_offload() | Davide Caratti
[ Upstream commit aa81700cf2326e288c9ca1fe7b544039617f1fc2 ] macsec_upd_offload() gets the value of MACSEC_OFFLOAD_ATTR_TYPE without checking its presence in the request message, and this causes a NULL dereference. Fix it rejecting any configuration that does not include this attribute. Reported-and-tested-by: syzbot+7022ab7c383875c17eff@syzkaller.appspotmail.com Fixes: dcb780fb2795 ("net: macsec: add nla support for changing the offloading selection") Signed-off-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
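A sketch of the kind of check described (the attribute array name and surrounding variables are assumptions):

	/* Sketch: reject the request instead of dereferencing a missing attribute. */
	if (!tb[MACSEC_OFFLOAD_ATTR_TYPE])
		return -EINVAL;

	offload = nla_get_u8(tb[MACSEC_OFFLOAD_ATTR_TYPE]);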
2020-04-23 | mm/hugetlb: fix build failure with HUGETLB_PAGE but not HUGETLBFS | Christophe Leroy
[ Upstream commit bb297bb2de517e41199185021f043bbc5d75b377 ] When CONFIG_HUGETLB_PAGE is set but not CONFIG_HUGETLBFS, the following build failure is encountered:

 In file included from arch/powerpc/mm/fault.c:33:0:
 include/linux/hugetlb.h: In function 'hstate_inode':
 include/linux/hugetlb.h:477:9: error: implicit declaration of function 'HUGETLBFS_SB' [-Werror=implicit-function-declaration]
   return HUGETLBFS_SB(i->i_sb)->hstate;
          ^
 include/linux/hugetlb.h:477:30: error: invalid type argument of '->' (have 'int')
   return HUGETLBFS_SB(i->i_sb)->hstate;
                                ^

Gate hstate_inode() with CONFIG_HUGETLBFS instead of CONFIG_HUGETLB_PAGE. Fixes: a137e1cc6d6e ("hugetlbfs: per mount huge page sizes") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: Baoquan He <bhe@redhat.com> Cc: Nishanth Aravamudan <nacc@us.ibm.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Adam Litke <agl@us.ibm.com> Cc: Andi Kleen <ak@suse.de> Link: http://lkml.kernel.org/r/7e8c3a3c9a587b9cd8a2f146df32a421b961f3a2.1584432148.git.christophe.leroy@c-s.fr Link: https://patchwork.ozlabs.org/patch/1255548/#2386036 Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
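A sketch of the gating change described (the stub body for the !HUGETLBFS case is an assumption):

	#ifdef CONFIG_HUGETLBFS
	static inline struct hstate *hstate_inode(struct inode *i)
	{
		return HUGETLBFS_SB(i->i_sb)->hstate;	/* HUGETLBFS_SB only exists here */
	}
	#else
	static inline struct hstate *hstate_inode(struct inode *i)
	{
		return NULL;
	}
	#endif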
2020-04-23 | platform/x86: intel-hid: fix: Update Tiger Lake ACPI device ID | Gayatri Kammela
[ Upstream commit d5764dc597467664a1a70ab66a2314a011aeccd4 ] Tiger Lake's new unique ACPI device ID for the intel-hid driver is not valid because it is missing a 'C'. Fix the ID by updating it. After the update, the new ID should now look like: INT1051 --> INTC1051 Fixes: bdd11b654035 ("platform/x86: intel-hid: Add Tiger Lake ACPI device ID") Suggested-by: Srinivas Pandruvada <srinivas.pandruvada@intel.com> Signed-off-by: Gayatri Kammela <gayatri.kammela@intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | dt-bindings: thermal: tsens: Fix nvmem-cell-names schema | Rob Herring
[ Upstream commit b9589def9f9af93d9d4c5969c9a6c166f070e36e ] There's a typo 'nvmem-cells-names' in the schema, which means the correct 'nvmem-cell-names' entries in the examples are not checked. The possible values are wrong too: the 2nd entry is not specified correctly, and the values do not match the dts files in the kernel. Fixes: a877e768f655 ("dt-bindings: thermal: tsens: Convert over to a yaml schema") Cc: Andy Gross <agross@kernel.org> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Amit Kucheria <amit.kucheria@linaro.org> Cc: Zhang Rui <rui.zhang@intel.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: linux-arm-msm@vger.kernel.org Cc: linux-pm@vger.kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | drm/amd/display: Don't try hdcp1.4 when content_type is set to type1 | Bhawanpreet Lakha
[ Upstream commit c2850c125d919efbb3a9ab46410d23912934f585 ] [Why] When the content type property is set to 1, we should enable hdcp2.2 and, if we can't, stop. Currently the way it works in DC is that if hdcp2 fails, we try hdcp1 afterwards. [How] Use the link config to force-disable hdcp1.4 when type1 is set. Signed-off-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | x86/xen: Make the boot CPU idle task reliable | Miroslav Benes
[ Upstream commit 2f62f36e62daec43aa7b9633ef7f18e042a80bed ] The unwinder reports the boot CPU idle task's stack on XEN PV as unreliable, which affects at least live patching. There are two reasons for this. First, the task does not follow the x86 convention that its stack starts at the offset right below saved pt_regs. It allows the unwinder to easily detect the end of the stack and verify it. Second, startup_xen() function does not store the return address before jumping to xen_start_kernel() which confuses the unwinder. Amend both issues by moving the starting point of initial stack in startup_xen() and storing the return address before the jump, which is exactly what call instruction does. Signed-off-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | cifs: Allocate encryption header through kmalloc | Long Li
[ Upstream commit 3946d0d04bb360acca72db5efe9ae8440012d9dc ] When encryption is used, smb2_transform_hdr is defined on the stack and is passed to the transport. This doesn't work with RDMA as the buffer needs to be DMA'ed. Fix it by using kmalloc. Signed-off-by: Long Li <longli@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
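For illustration, a sketch of moving the transform header off the stack so it can be DMA'ed (the error handling details are an assumption):

	struct smb2_transform_hdr *tr_hdr;

	tr_hdr = kmalloc(sizeof(*tr_hdr), GFP_KERNEL);
	if (!tr_hdr)
		return -ENOMEM;

	/* fill the header and hand it to the transport; a heap buffer is safe
	 * for the RDMA (smbd) path, unlike a stack variable */

	kfree(tr_hdr);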
2020-04-23 | um: ubd: Prevent buffer overrun on command completion | Gabriel Krisman Bertazi
[ Upstream commit 6e682d53fc1ef73a169e2a5300326cb23abb32ee ] On the hypervisor side, when completing commands and the pipe is full, we retry writing only the entries that failed, by offsetting io_req_buffer, but we don't reduce the number of bytes written, which can cause a buffer overrun of io_req_buffer, and write garbage to the pipe. Cc: Martyn Welch <martyn.welch@collabora.com> Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | ext4: do not commit super on read-only bdev | Eric Sandeen
[ Upstream commit c96e2b8564adfb8ac14469ebc51ddc1bfecb3ae2 ] Under some circumstances we may encounter a filesystem error on a read-only block device, and if we try to save the error info to the superblock and commit it, we'll wind up with a noisy error and backtrace, i.e.:

 [ 3337.146838] EXT4-fs error (device pmem1p2): ext4_get_journal_inode:4634: comm mount: inode #0: comm mount: iget: illegal inode #
 ------------[ cut here ]------------
 generic_make_request: Trying to write to read-only block-device pmem1p2 (partno 2)
 WARNING: CPU: 107 PID: 115347 at block/blk-core.c:788 generic_make_request_checks+0x6b4/0x7d0
 ...

To avoid this, commit the error info in the superblock only if the block device is writable. Reported-by: Ritesh Harjani <riteshh@linux.ibm.com> Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Andreas Dilger <adilger@dilger.ca> Link: https://lore.kernel.org/r/4b6e774d-cc00-3469-7abb-108eb151071a@sandeen.net Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <sashal@kernel.org>
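A sketch of the guard described above (assuming bdev_read_only() as the check and a save_error_info()-style helper; not the exact upstream hunk):

	/* Sketch: don't try to persist error info on a read-only block device. */
	if (bdev_read_only(sb->s_bdev))
		return;

	save_error_info(sb, func, line);	/* updates and commits the superblock */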
2020-04-23 | nfsroot: set tcp as the default transport protocol | Liwei Song
[ Upstream commit 89c8023fd46167a41246a56b31d1b3c9a20b6970 ] UDP is disabled by default in commit b24ee6c64ca7 ("NFS: allow deprecation of NFS UDP protocol"), but the default mount option is still udp; change it to tcp to avoid the "Unsupported transport protocol udp" error when no protocol is specified while mounting nfs. Fixes: b24ee6c64ca7 ("NFS: allow deprecation of NFS UDP protocol") Signed-off-by: Liwei Song <liwei.song@windriver.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | s390/cpum_sf: Fix wrong page count in error message | Thomas Richter
[ Upstream commit 4141b6a5e9f171325effc36a22eb92bf961e7a5c ] When perf record -e SF_CYCLES_BASIC_DIAG runs with very high frequency, the samples arrive faster than the perf process can save them to file. Eventually, for longer running processes, this leads to the situation where the trace buffers allocated by perf slowly fill up. At one point the auxiliary trace buffer is full and the CPU Measurement sampling facility is turned off. Furthermore a warning is printed to the kernel log buffer:

 cpum_sf: The AUX buffer with 0 pages for the diagnostic-sampling mode is full

The number of allocated pages for the auxiliary trace buffer is shown as zero pages. That is wrong. Fix this by saving the number of allocated pages before entering the work loop in the interrupt handler. When the interrupt handler processes the samples, it may detect the buffer full condition and stop sampling, reducing the buffer size to zero. Print the correct value in the error message:

 cpum_sf: The AUX buffer with 256 pages for the diagnostic-sampling mode is full

Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | powerpc/maple: Fix declaration made after definition | Nathan Chancellor
[ Upstream commit af6cf95c4d003fccd6c2ecc99a598fb854b537e7 ] When building ppc64 defconfig, Clang errors (trimmed for brevity):

 arch/powerpc/platforms/maple/setup.c:365:1: error: attribute declaration must precede definition [-Werror,-Wignored-attributes]
 machine_device_initcall(maple, maple_cpc925_edac_setup);
 ^

machine_device_initcall expands to __define_machine_initcall, which in turn has the macro machine_is used in it, which declares mach_##name with an __attribute__((weak)). define_machine actually defines mach_##name, which in this file happens before the declaration, hence the warning. To fix this, move define_machine after machine_device_initcall so that the declaration occurs before the definition, which matches how machine_device_initcall and define_machine work throughout arch/powerpc. While we're here, remove some spaces before tabs. Fixes: 8f101a051ef0 ("edac: cpc925 MC platform device setup") Reported-by: Nick Desaulniers <ndesaulniers@google.com> Suggested-by: Ilie Halip <ilie.halip@gmail.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200323222729.15365-1-natechancellor@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | powerpc/prom_init: Pass the "os-term" message to hypervisor | Alexey Kardashevskiy
[ Upstream commit 74bb84e5117146fa73eb9d01305975c53022b3c3 ] The "os-term" RTAS call has one argument: the address of a message describing the cause of the OS termination. rtas_os_term() already passes it, but the recently added prom_init version of that call missed it; it also does not fill args correctly. This passes the message address and initializes the number of arguments. Fixes: 6a9c930bd775 ("powerpc/prom_init: Add the ESM call to prom_init") Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200312074404.87293-1-aik@ozlabs.ru Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | btrfs: add RCU locks around block group initialization | Madhuparna Bhowmik
[ Upstream commit 29566c9c773456467933ee22bbca1c2b72a3506c ] The space_info list is normally RCU protected and should be traversed with rcu_read_lock held. There's a warning:

 [29.104756] WARNING: suspicious RCU usage
 [29.105046] 5.6.0-rc4-next-20200305 #1 Not tainted
 [29.105231] -----------------------------
 [29.105401] fs/btrfs/block-group.c:2011 RCU-list traversed in non-reader section!!

pointing out that the locking is missing in btrfs_read_block_groups. However this is not necessary as the list traversal happens at mount time when there's no other thread potentially accessing the list. To fix the warning and for consistency let's add the RCU lock/unlock; the code won't be affected much as it's doing some lightweight operations. Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
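A sketch of the added locking (the list and field names follow the description above; the loop body is elided):

	/* Sketch: the space_info list is RCU-protected, so take the read lock
	 * around the traversal even though mount time has no concurrent writers. */
	rcu_read_lock();
	list_for_each_entry_rcu(space_info, &info->space_info, list) {
		/* lightweight per-space_info initialization */
	}
	rcu_read_unlock();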
2020-04-23 | hibernate: Allow uswsusp to write to swap | Domenico Andreoli
[ Upstream commit 56939e014a6c212b317414faa307029e2e80c3b9 ] It turns out that there is one use case for programs being able to write to swap devices, and that is the userspace hibernation code. Quick fix: disable the S_SWAPFILE check if hibernation is configured. Fixes: dc617f29dbe5 ("vfs: don't allow writes to swap files") Reported-by: Domenico Andreoli <domenico.andreoli@linux.com> Reported-by: Marian Klein <mkleinsoft@gmail.com> Signed-off-by: Domenico Andreoli <domenico.andreoli@linux.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | thermal/drivers/cpufreq_cooling: Fix return of cpufreq_set_cur_state | Willy Wolff
[ Upstream commit ff44f672d74178b3be19d41a169b98b3e391d4ce ] When setting the cooling device's current state from userspace via sysfs, the operation fails, returning -EINVAL. It appears the recent changes with the per-policy frequency QoS introduced a regression, as reported by: https://lkml.org/lkml/2020/3/20/599 The function freq_qos_update_request returns 0 or 1 describing update effectiveness, and a negative error code on failure. However, cpufreq_set_cur_state returns 0 on success or an error code otherwise. Consider the QoS update as successful if the function does not return an error. Fixes: 3000ce3c52f8b ("cpufreq: Use per-policy frequency QoS") Signed-off-by: Willy Wolff <willy.mh.wolff.ml@gmail.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://lore.kernel.org/r/20200321092740.7vvwfxsebcrznydh@macmini.local Signed-off-by: Sasha Levin <sashal@kernel.org>
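For illustration, a sketch of normalizing the return value (the cooling-device struct and field names are assumptions):

	/* freq_qos_update_request() returns 0 or 1 on success and < 0 on error,
	 * while cpufreq_set_cur_state() must return 0 on success. */
	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);

	return ret < 0 ? ret : 0;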
2020-04-23 | MIPS: DTS: CI20: add DT node for IR sensor | Alex Smith
[ Upstream commit f5e8fcf85a25bac26c32a0000dbab5857ead9113 ] The infrared sensor on the CI20 board is connected to a GPIO and can be operated by using the gpio-ir-recv driver. Add a DT node for the sensor to allow that driver to be used. Signed-off-by: Alex Smith <alex.smith@imgtec.com> Signed-off-by: H. Nikolaus Schaller <hns@goldelico.com> Reviewed-by: Paul Cercueil <paul@crapouillou.net> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | s390/cpuinfo: fix wrong output when CPU0 is offline | Alexander Gordeev
[ Upstream commit 872f27103874a73783aeff2aac2b41a489f67d7c ] /proc/cpuinfo should not print information about CPU 0 when it is offline. Fixes: 281eaa8cb67c ("s390/cpuinfo: simplify locking and skip offline cpus early") Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> [heiko.carstens@de.ibm.com: shortened commit message] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | f2fs: Add a new CP flag to help fsck fix resize SPO issues | Sahitya Tummala
[ Upstream commit c84ef3c5e65ccf99a7a91a4d731ebb5d6331a178 ] Add and set a new CP flag CP_RESIZEFS_FLAG during online resize FS to help fsck fix the metadata mismatch that may happen due to SPO during resize, where SB got updated but CP data couldn't be written yet. fsck errors -

 Info: CKPT version = 6ed7bccb
 Wrong user_block_count(2233856)
 [f2fs_do_mount:3365] Checkpoint is polluted

Signed-off-by: Sahitya Tummala <stummala@codeaurora.org> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | f2fs: Fix mount failure due to SPO after a successful online resize FS | Sahitya Tummala
[ Upstream commit 682756827501dc52593bf490f2d437c65ec9efcb ] Even though online resize is successfully done, an SPO immediately after resize still causes the below error in the next mount:

 [ 11.294650] F2FS-fs (sda8): Wrong user_block_count: 2233856
 [ 11.300272] F2FS-fs (sda8): Failed to get valid F2FS checkpoint

This is because, after the FS metadata is updated in update_fs_metadata(), if SBI_IS_DIRTY is not set, then CP will not be done to reflect the new user_block_count. Signed-off-by: Sahitya Tummala <stummala@codeaurora.org> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | f2fs: fix to update f2fs_super_block fields under sb_lock | Chao Yu
[ Upstream commit a4ba5dfc5c88e49bb03385abfdd28c5a0acfbb54 ] Fields in struct f2fs_super_block should be updated under coverage of sb_lock, fix to adjust update_sb_metadata() for that rule. Fixes: 04f0b2eaa3b3 ("f2fs: ioctl for removing a range from F2FS") Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
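A sketch of the locking rule described (the lock type and the specific fields touched are assumptions based on the description, not the exact upstream hunk):

	/* Sketch: serialize raw super block field updates with sb_lock. */
	down_write(&sbi->sb_lock);
	raw_sb->section_count = cpu_to_le32(section_count);
	raw_sb->segment_count = cpu_to_le32(segment_count);
	up_write(&sbi->sb_lock);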
2020-04-23 | NFS: direct.c: Fix memory leak of dreq when nfs_get_lock_context fails | Misono Tomohiro
[ Upstream commit 8605cf0e852af3b2c771c18417499dc4ceed03d5 ] When dreq is allocated by nfs_direct_req_alloc(), dreq->kref is initialized to 2. Therefore we need to call nfs_direct_req_release() twice to release the allocated dreq. Usually it is called in nfs_file_direct_{read, write}() and nfs_direct_complete(). However, current code only calls nfs_direct_req_release() once if nfs_get_lock_context() fails in nfs_file_direct_{read, write}(). So, that case would result in a memory leak. Fix this by adding the missing call. Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
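A sketch of the missing release in the error path (the label and variable names are assumptions):

	l_ctx = nfs_get_lock_context(dreq->ctx);
	if (IS_ERR(l_ctx)) {
		result = PTR_ERR(l_ctx);
		/* dreq starts with a refcount of 2: drop the reference that
		 * nfs_direct_complete() will now never get to drop. */
		nfs_direct_req_release(dreq);
		goto out_release;
	}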
2020-04-23 | phy: uniphier-usb3ss: Add Pro5 support | Kunihiko Hayashi
[ Upstream commit 9376fa634afc207a3ce99e0957e04948c34d6510 ] Pro5 SoC has same scheme of USB3 ss-phy as Pro4, so the data for Pro5 is equivalent to Pro4. Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | drivers: thermal: tsens: Release device in success path | Amit Kucheria
[ Upstream commit f22a3bf0d2225fba438c46a25d3ab8823585a5e0 ] We don't currently call put_device in case of successfully initialising the device. So we hold the reference and keep the device pinned forever. Allow control to fall through so we can use the same code for the success and error paths to put_device. As a part of this fixup, change devm_ioremap_resource to act on the same device pointer as that used to allocate regmap memory. That ensures that we are free to release op->dev after examining its resources. Signed-off-by: Amit Kucheria <amit.kucheria@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://lore.kernel.org/r/d3996667e9f976bb30e97e301585cb1023be422e.1584015867.git.amit.kucheria@linaro.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | f2fs: fix to show norecovery mount option | Chao Yu
[ Upstream commit a9117eca1de6b738e713d2142126db2cfbf6fb36 ] Previously, the 'norecovery' mount option was shown as 'disable_roll_forward'; fix to show the original option name correctly. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | KVM: PPC: Book3S HV: Fix H_CEDE return code for nested guests | Michael Roth
[ Upstream commit 1f50cc1705350a4697923203fedd7d8fb1087fe2 ] The h_cede_tm kvm-unit-test currently fails when run inside an L1 guest via the guest/nested hypervisor:

 ./run-tests.sh -v
 ...
 TESTNAME=h_cede_tm TIMEOUT=90s ACCEL= ./powerpc/run powerpc/tm.elf -smp 2,threads=2 -machine cap-htm=on -append "h_cede_tm"
 FAIL h_cede_tm (2 tests, 1 unexpected failures)

While the test relates to transactional memory instructions, the actual failure is due to the return code of the H_CEDE hypercall, which is reported as 224 instead of 0. This happens even when no TM instructions are issued. 224 is the value placed in r3 to execute a hypercall for H_CEDE, and r3 is where the caller expects the return code to be placed upon return. In the case of a guest running under a nested hypervisor, issuing H_CEDE causes a return from H_ENTER_NESTED. In this case H_CEDE is specially-handled immediately rather than later in kvmppc_pseries_do_hcall() as with most other hcalls, but we forget to set the return code for the caller, hence why kvm-unit-test sees the 224 return code and reports an error. Guest kernels generally don't check the return value of H_CEDE, so that likely explains why this hasn't caused issues outside of kvm-unit-tests so far. Fix this by setting r3 to 0 after we finish processing the H_CEDE. RHBZ: 1778556 Fixes: 4bad77799fed ("KVM: PPC: Book3S HV: Handle hypercalls correctly when nested") Cc: linuxppc-dev@ozlabs.org Cc: David Gibson <david@gibson.dropbear.id.au> Cc: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
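For illustration, the kind of one-line fix described ("set r3 to 0 after processing H_CEDE"; the vcpu variable name is assumed):

	/* Sketch: report H_CEDE success (0, i.e. H_SUCCESS) to the caller in r3. */
	kvmppc_set_gpr(vcpu, 3, 0);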
2020-04-23 | xfs: fix incorrect test in xfs_alloc_ag_vextent_lastblock | Darrick J. Wong
[ Upstream commit 77ca1eed5a7d2bf0905562eb1a15aac76bc19fe4 ] When I lifted the code in xfs_alloc_ag_vextent_lastblock out of a loop, I forgot to convert all the accesses to len to be pointer dereferences. Coverity-id: 1457918 Fixes: 5113f8ec3753ed ("xfs: clean up weird while loop in xfs_alloc_ag_vextent_near") Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | ARM: dts: rockchip: fix lvds-encoder ports subnode for rk3188-bqedison2qc | Johan Jonker
[ Upstream commit 1a7e99599dffd836fcb720cdc0eaf3cd43d7af4a ] A test with the command below gives this error:

 arch/arm/boot/dts/rk3188-bqedison2qc.dt.yaml: lvds-encoder: 'ports' is a required property

Fix the error by adding a ports wrapper for port@0 and port@1 inside the 'lvds-encoder' node for rk3188-bqedison2qc.

 make ARCH=arm dtbs_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml

Signed-off-by: Johan Jonker <jbx6244@gmail.com> Link: https://lore.kernel.org/r/20200316174647.5598-1-jbx6244@gmail.com Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | NFSv4.2: error out when relink swapfile | Murphy Zhou
[ Upstream commit f5fdf1243fb750598b46305dd03c553949cfa14f ] This fixes xfstests generic/356 failure on NFSv4.2. Signed-off-by: Murphy Zhou <jencce.kernel@gmail.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | NFSv4/pnfs: Return valid stateids in nfs_layout_find_inode_by_stateid() | Trond Myklebust
[ Upstream commit d911c57a19551c6bef116a3b55c6b089901aacb0 ] Make sure to test the stateid for validity so that we catch instances where the server may have been reusing stateids in nfs_layout_find_inode_by_stateid(). Fixes: 7b410d9ce460 ("pNFS: Delay getting the layout header in CB_LAYOUTRECALL handlers") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | NFS: alloc_nfs_open_context() must use the file cred when available | Trond Myklebust
[ Upstream commit 1d179d6bd67369a52edea8562154b31ee20be1cc ] If we're creating a nfs_open_context() for a specific file pointer, we must use the cred assigned to that file. Fixes: a52458b48af1 ("NFS/NFSD/SUNRPC: replace generic creds with 'struct cred'.") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | rtc: 88pm860x: fix possible race condition | Alexandre Belloni
[ Upstream commit 9cf4789e6e4673d0b2c96fa6bb0c35e81b43111a ] The RTC IRQ is requested before the struct rtc_device is allocated; this may lead to a NULL pointer dereference in the IRQ handler. To fix this issue, allocate the rtc_device struct before requesting the RTC IRQ, using devm_rtc_allocate_device, and use rtc_register_device to register the RTC device. Also remove the unnecessary error message as the core already prints the info. Link: https://lore.kernel.org/r/20200311223956.51352-1-alexandre.belloni@bootlin.com Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
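A sketch of the reordering described (driver-local names such as info, rtc_update_handler and pm860x_rtc_ops are assumptions):

	/* Sketch: allocate the rtc_device first so the IRQ handler can never
	 * observe a NULL rtc pointer, then register it at the end of probe. */
	info->rtc_dev = devm_rtc_allocate_device(&pdev->dev);
	if (IS_ERR(info->rtc_dev))
		return PTR_ERR(info->rtc_dev);

	ret = devm_request_threaded_irq(&pdev->dev, info->irq, NULL,
					rtc_update_handler, IRQF_ONESHOT,
					"rtc", info);
	if (ret < 0)
		return ret;

	info->rtc_dev->ops = &pm860x_rtc_ops;
	return rtc_register_device(info->rtc_dev);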
2020-04-23 | dma-coherent: fix integer overflow in the reserved-memory dma allocation | Kevin Grandemange
[ Upstream commit 286c21de32b904131f8cf6a36ce40b8b0c9c5da3 ] pageno is an int and the PAGE_SHIFT shift is done on an int, overflowing if the memory is bigger than 2G. This can be reproduced using for example a reserved-memory of 4G:

 reserved-memory {
 	#address-cells = <2>;
 	#size-cells = <2>;
 	ranges;

 	reserved_dma: buffer@0 {
 		compatible = "shared-dma-pool";
 		no-map;
 		reg = <0x5 0x00000000 0x1 0x0>;
 	};
 };

Signed-off-by: Kevin Grandemange <kevin.grandemange@allegrodvt.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
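A sketch of the overflow-safe computation described (field names follow the generic dma-coherent allocator; treat the surrounding variables as assumptions):

	/* Sketch: widen before shifting so a page index covering more than 2G
	 * of pool memory doesn't overflow a 32-bit int. */
	dma_addr_t off = (dma_addr_t)pageno << PAGE_SHIFT;

	*dma_handle = mem->device_base + off;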