path: root/arch/mips/mm
Age    Commit message    Author
2016-05-28  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds)
Pull more MIPS updates from Ralf Baechle: "This is the second batch of MIPS patches for 4.7. Summary: CPS: - Copy EVA configuration when starting secondary VPs. EIC: - Clear Status IPL. Lasat: - Fix a few off by one bugs. lib: - Mark intrinsics notrace. Not only are the intrinsics uninteresting for debugging, but also tracing them might result in infinite recursion. MAINTAINERS: - Add file patterns for MIPS BRCM device tree bindings. - Add file patterns for mips device tree bindings. MT7628: - Fix MT7628 pinmux typos. - wled_an pinmux gpio. - EPHY LEDs pinmux support. Pistachio: - Enable KASLR. VDSO: - Build microMIPS VDSO for microMIPS kernels. - Fix aliasing warning by building with `-fno-strict-aliasing'. Misc: - Add missing FROZEN hotplug notifier transitions. - Fix clk binding example for various PIC32 devices. - Fix cpu interrupt controller node-names in the DT files. - Fix XPA CPU feature separation. - Fix write_gc0_* macros when writing zero. - Add inline asm encoding helpers. - Add missing VZ accessor microMIPS encodings. - Fix little endian microMIPS MSA encodings. - Add 64-bit HTW fields and fix its configuration. - Fix sigreturn via VDSO on microMIPS kernel. - Lots of typo fixes. - Add definitions of SegCtl registers and use them" * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (49 commits) MIPS: Add missing FROZEN hotplug notifier transitions MIPS: Build microMIPS VDSO for microMIPS kernels MIPS: Fix sigreturn via VDSO on microMIPS kernel MIPS: devicetree: fix cpu interrupt controller node-names MIPS: VDSO: Build with `-fno-strict-aliasing' MIPS: Pistachio: Enable KASLR MIPS: lib: Mark intrinsics notrace MIPS: Fix 64-bit HTW configuration MIPS: Add 64-bit HTW fields MAINTAINERS: Add file patterns for mips device tree bindings MAINTAINERS: Add file patterns for mips brcm device tree bindings MIPS: Simplify DSP instruction encoding macros MIPS: Add missing tlbinvf/XPA microMIPS encodings MIPS: Fix little endian microMIPS MSA encodings MIPS: Add missing VZ accessor microMIPS encodings MIPS: Add inline asm encoding helpers MIPS: Spelling fix lets -> let's MIPS: VR41xx: Fix typo MIPS: oprofile: Fix typo MIPS: math-emu: Fix typo ...
2016-05-28  MIPS: Fix 64-bit HTW configuration  (James Hogan)
The Hardware page Table Walker (HTW) is being misconfigured on 64-bit kernels. The PWSize.PS (pointer size) bit determines whether pointers within directories are loaded as 32-bit or 64-bit addresses, but was never being set to 1 for 64-bit kernels where the unsigned long in pgd_t is 64-bits wide. This actually reduces rather than improves performance when the HTW is enabled on P6600 since the HTW is initiated lots, but walks are all aborted due I think to bad intermediate pointers. Since we were already taking the width of the PTEs into account by setting PWSize.PTEW, which is the left shift applied to the page table index *in addition to* the native pointer size, we also need to reduce PTEW by 1 when PS=1. This is done by calculating PTEW based on the relative size of pte_t compared to pgd_t. Finally in order for the HTW to be used when PS=1, the appropriate XK/XS/XU bits corresponding to the different 64-bit segments need to be set in PWCtl. We enable only XU for now to enable walking for XUSeg. Supporting walking for XKSeg would be a bit more involved so is left for a future patch. It would either require the use of a per-CPU top level base directory if supported by the HTW (a bit like pgd_current but with a second entry pointing at swapper_pg_dir), or the HTW would prepend bit 63 of the address to the global directory index which doesn't really match how we split user and kernel page directories. Fixes: cab25bc7537b ("MIPS: Extend hardware table walking support to MIPS64") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13364/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
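As a rough sketch of the configuration described above (a kernel-context fragment, not the actual hunk; the MIPS_PWSIZE_*/MIPS_PWCTL_* names stand in for the real field definitions added in the companion patch):

    /* Sketch: 64-bit HTW setup along the lines described above. */
    static void config_htw_64bit_sketch(void)
    {
            unsigned int pwsize = 0, pwctl = 0;

            /* On a 64-bit kernel, directory pointers are 64 bits wide. */
            if (IS_ENABLED(CONFIG_64BIT))
                    pwsize |= MIPS_PWSIZE_PS_MASK;

            /*
             * PTEW is the shift applied in addition to the native pointer
             * size, so derive it from pte_t relative to pgd_t: 0 when PTEs
             * are pointer sized, 1 for 8-byte PTEs on a 32-bit kernel.
             */
            pwsize |= ilog2(sizeof(pte_t) / sizeof(pgd_t)) << MIPS_PWSIZE_PTEW_SHIFT;
            write_c0_pwsize(pwsize);

            /* Enable walking for XUSeg only; XKSeg is left for later. */
            if (IS_ENABLED(CONFIG_64BIT))
                    pwctl |= MIPS_PWCTL_XU_MASK;
            write_c0_pwctl(pwctl);
    }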
2016-05-28  MIPS: Add 64-bit HTW fields  (James Hogan)
Add field definitions for some of the 64-bit specific Hardware page Table Walker (HTW) register fields in PWSize and PWCtl, in preparation for fixing the 64-bit HTW configuration. Also print these fields out along with the others in print_htw_config(). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13363/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-19  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds)
Merge updates from Andrew Morton: - fsnotify fix - poll() timeout fix - a few scripts/ tweaks - debugobjects updates - the (small) ocfs2 queue - Minor fixes to kernel/padata.c - Maybe half of the MM queue * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (117 commits) mm, page_alloc: restore the original nodemask if the fast path allocation failed mm, page_alloc: uninline the bad page part of check_new_page() mm, page_alloc: don't duplicate code in free_pcp_prepare mm, page_alloc: defer debugging checks of pages allocated from the PCP mm, page_alloc: defer debugging checks of freed pages until a PCP drain cpuset: use static key better and convert to new API mm, page_alloc: inline pageblock lookup in page free fast paths mm, page_alloc: remove unnecessary variable from free_pcppages_bulk mm, page_alloc: pull out side effects from free_pages_check mm, page_alloc: un-inline the bad part of free_pages_check mm, page_alloc: check multiple page fields with a single branch mm, page_alloc: remove field from alloc_context mm, page_alloc: avoid looking up the first zone in a zonelist twice mm, page_alloc: shortcut watermark checks for order-0 pages mm, page_alloc: reduce cost of fair zone allocation policy retry mm, page_alloc: shorten the page allocator fast path mm, page_alloc: check once if a zone has isolated pageblocks mm, page_alloc: move __GFP_HARDWALL modifications out of the fastpath mm, page_alloc: simplify last cpupid reset mm, page_alloc: remove unnecessary initialisation from __alloc_pages_nodemask() ...
2016-05-19  arch: fix has_transparent_hugepage()  (Hugh Dickins)
I've just discovered that the useful-sounding has_transparent_hugepage() is actually an architecture-dependent minefield: on some arches it only builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when not, but on some of those (arm and arm64) it then gives the wrong answer; and on mips alone it's marked __init, which would crash if called later (but so far it has not been called later). Straighten this out: make it available to all configs, with a sensible default in asm-generic/pgtable.h, removing its definitions from those arches (arc, arm, arm64, sparc, tile) which are served by the default, adding #define has_transparent_hugepage has_transparent_hugepage to those (mips, powerpc, s390, x86) which need to override the default at runtime, and removing the __init from mips (but maybe that kind of code should be avoided after init: set a static variable the first time it's called). Signed-off-by: Hugh Dickins <hughd@google.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andres Lagar-Cavilla <andreslc@google.com> Cc: Yang Shi <yang.shi@linaro.org> Cc: Ning Qu <quning@gmail.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc] Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [arch/s390] Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
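The override pattern described above looks roughly like this (a sketch: the generic default is paraphrased rather than quoted, and mips_has_thp_support() is a hypothetical stand-in for the arch's real runtime check):

    /* asm-generic fallback (sketch): available in every configuration. */
    #ifndef has_transparent_hugepage
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    #define has_transparent_hugepage() 1
    #else
    #define has_transparent_hugepage() 0
    #endif
    #endif

    /* Arch override pattern (mips, powerpc, s390, x86): a runtime test plus
     * the self-referential define so the generic fallback steps aside. */
    static inline int has_transparent_hugepage(void)
    {
            return mips_has_thp_support();  /* hypothetical runtime probe */
    }
    #define has_transparent_hugepage has_transparent_hugepage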
2016-05-13  MIPS: mm: Panic if an XPA kernel is run without RIXI  (Paul Burton)
XPA kernels hard-code the presence of RIXI - the PTE format & its handling presume RI & XI bits. Make this dependence explicit by panicking if we run on a system that violates it. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13125/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
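A minimal sketch of the check (placement is illustrative; cpu_has_rixi is the existing feature test):

    /* Refuse to run an XPA kernel on hardware that lacks the RI/XI bits. */
    if (IS_ENABLED(CONFIG_XPA) && !cpu_has_rixi)
            panic("Kernel built with XPA but running on non-RIXI hardware");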
2016-05-13  MIPS: mm: Don't do MTHC0 if XPA not present  (James Hogan)
Performing an MTHC0 instruction without XPA being present will trigger a reserved instruction exception, therefore conditionalise the use of this instruction when building TLB handlers (build_update_entries()), and in __update_tlb(). This allows an XPA kernel to run on non XPA hardware without that instruction implemented, just like it can run on XPA capable hardware without XPA in use (with the noxpa kernel argument) or with XPA not configured in hardware. [paul.burton@imgtec.com: - Rebase atop other TLB work. - Add "mm" to subject. - Handle the __kmap_pgprot case.] Fixes: c5b367835cfc ("MIPS: Add support for XPA.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: David Hildenbrand <dahi@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jerome Marchand <jmarchan@redhat.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13124/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
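At the C level this boils down to guarding the MTHC0-generating accessor on the XPA feature bit, roughly as below (a sketch that deliberately glosses over the exact EntryLo formatting):

    /* The low 32 bits of the PTE go into EntryLo as before... */
    write_c0_entrylo0((u32)pte_val(pte));
    /* ...but the upper bits need MTHC0, which only exists with XPA. */
    if (cpu_has_xpa)
            writex_c0_entrylo0(pte_val(pte) >> 32);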
2016-05-13  MIPS: mm: Simplify build_update_entries  (Paul Burton)
We can simplify build_update_entries by unifying the code for the case of 36-bit physical addressing on MIPS32 with the general case: use pte_off_ variables in all cases & handle the trivial _PAGE_GLOBAL_SHIFT == 0 case in build_convert_pte_to_entrylo. This leaves XPA as the only special case. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13123/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: mm: Be more explicit about PTE mode bit handling  (Paul Burton)
The XPA case in iPTE_SW or's in software mode bits to the pte_low value (which is what actually ends up in the high 32 bits of EntryLo...). It does this presuming that only bits in the upper 16 bits of the 32 bit pte_low value will be set. Make this assumption explicit with a BUG_ON. A similar assumption is made for the hardware mode bits, which are or'd in with a single ori instruction. Make that assumption explicit with a BUG_ON too. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13122/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: mm: Pass scratch register through to iPTE_SW  (Paul Burton)
Rather than hardcode a scratch register for the XPA case in iPTE_SW, pass one through from the work registers allocated by the caller. This allows for the XPA path to function correctly regardless of the work registers in use. Without doing this there are cases (where KScratch registers are unavailable) in which iPTE_SW will incorrectly clobber $1 despite it already being in use for the PTE or PTE pointer. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13121/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: mm: Don't clobber $1 on XPA TLB refill  (James Hogan)
For XPA kernels build_update_entries() uses $1 (at) as a scratch register, but doesn't arrange for it to be preserved, so it will always be clobbered by the TLB refill exception. Although this register normally has a very short lifetime that doesn't cross memory accesses, TLB refills due to instruction fetches (either on a page boundary or after preemption) could clobber live data, and its easy to reproduce the clobber with a little bit of assembler code. Note that the use of a hardware page table walker will partly mask the problem, as the TLB refill handler will not always be invoked. This is fixed by avoiding the use of the extra scratch register. The pte_high parts (going into the lower half of the EntryLo registers) are loaded and manipulated separately so as to keep the PTE pointer around for the other halves (instead of storing in the scratch register), and the pte_low parts (going into the high half of the EntryLo registers) are masked with 0x00ffffff using an ext instruction (instead of loading 0x00ffffff into the scratch register and AND'ing). [paul.burton@imgtec.com: - Rebase atop other TLB work. - Use ext instead of an sll, srl sequence. - Use cpu_has_xpa instead of #ifdefs. - Modify commit subject to include "mm".] Fixes: c5b367835cfc ("MIPS: Add support for XPA.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13120/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
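The ext-based masking looks roughly like this in the generated refill code (an illustrative fragment; pte_off_odd and the register names follow the surrounding tlbex.c code as described above rather than introducing anything new):

    /* Reload the odd pte_low half and keep only the hardware bits,
     * masking in place instead of spending a scratch register on the
     * 0x00ffffff constant. */
    UASM_i_LW(p, tmp, pte_off_odd, ptep);
    uasm_i_ext(p, tmp, tmp, 0, 24);         /* tmp &= 0x00ffffff */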
2016-05-13  MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic)  (Paul Burton)
There are 2 distinct cases in which a kernel for a MIPS32 CPU (CONFIG_CPU_MIPS32=y) may use 64 bit physical addresses (CONFIG_PHYS_ADDR_T_64BIT=y): - 36 bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR CPUs. - MIPS32r5 eXtended Physical Addressing (XPA). These 2 cases are distinct in that they require different behaviour from the kernel - the EntryLo registers have different formats. Until Linux v4.1 we only supported the first case, with code conditional upon the 2 aforementioned Kconfig variables being set. Commit c5b367835cfc ("MIPS: Add support for XPA.") added support for the second case, but did so by modifying the code that existed for the first case rather than treating the 2 cases as distinct. Since the EntryLo registers have different formats this breaks the 36 bit Alchemy/XLP/XLR case. Fix this by splitting the 2 cases, with XPA cases now being conditional upon CONFIG_XPA and the non-XPA case matching the code as it existed prior to commit c5b367835cfc ("MIPS: Add support for XPA."). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reported-by: Manuel Lauss <manuel.lauss@gmail.com> Tested-by: Manuel Lauss <manuel.lauss@gmail.com> Fixes: c5b367835cfc ("MIPS: Add support for XPA.") Cc: James Hogan <james.hogan@imgtec.com> Cc: David Daney <david.daney@cavium.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: David Hildenbrand <dahi@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Alex Smith <alex.smith@imgtec.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: stable@vger.kernel.org # v4.1+ Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13119/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ  (Paul Burton)
Ever since support for RI/XI was implemented by commit 6dd9344cfc41 ("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of _PAGE_READ & _PAGE_NO_READ bits. Rather than keep both around, switch away from using _PAGE_READ to determine page presence & instead invert the use to _PAGE_NO_READ. Wherever we formerly had no definition for _PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end result is that we consistently use _PAGE_NO_READ to determine whether a page is readable, regardless of whether RI/XI is implemented. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: David Daney <david.daney@cavium.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Alex Smith <alex.smith@imgtec.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13116/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Fix HTW config on XPA kernel without LPA enabled  (James Hogan)
The hardware page table walker (HTW) configuration is broken on XPA kernels where XPA couldn't be enabled (either nohtw or the hardware doesn't support it). This is because the PWSize.PTEW field (PTE width) was only set to 8 bytes (an extra shift of 1) in config_htw_params() if PageGrain.ELPA (enable large physical addressing) is set. On an XPA kernel though the size of PTEs is fixed at 8 bytes regardless of whether XPA could actually be enabled. Fix the initialisation of this field based on sizeof(pte_t) instead. Fixes: c5b367835cfc ("MIPS: Add support for XPA.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Steven J. Hill <sjhill@realitydiluted.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13113/ Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Retrieve ASID masks using function accepting struct cpuinfo_mips  (Paul Burton)
In preparation for supporting variable ASID masks, retrieve ASID masks using functions in asm/cpu-info.h which accept struct cpuinfo_mips. This will allow those functions to determine the ASID mask based upon the CPU in a later patch. This also allows for the r3k & r8k cases to be handled in Kconfig, which is arguably cleaner than the previous #ifdefs. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13210/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
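The accessor style being introduced is essentially the following (a sketch of the shape, not a verbatim copy of asm/cpu-info.h):

    static inline unsigned long cpu_asid_mask(struct cpuinfo_mips *cpuinfo)
    {
            return cpuinfo->asid_mask;      /* filled in when the CPU is probed */
    }

    /* Callers then write cpu_asid_mask(&current_cpu_data) or
     * cpu_asid_mask(&cpu_data[cpu]) instead of a compile-time ASID_MASK. */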
2016-05-13  MIPS: remove aliasing alignment if HW has antialiasing support  (Leonid Yegoshin)
MIPS hardware may have antialiasing support, and it works even when the page size is small. Set the shared memory aliasing mask to the page size if the hardware has antialiasing support. A big shared memory mask forces a disruption in page address assignment, and that corrupts Android library memory handling. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: cernekee@gmail.com Cc: paul.gortmaker@windriver.com Cc: kumba@gentoo.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/11516/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Loongson-3: Introduce CONFIG_LOONGSON3_ENHANCEMENT  (Huacai Chen)
Newer Loongson-3 CPUs (Loongson-3A R2 and later, as opposed to Loongson-3A R1, Loongson-3B R1 and Loongson-3B R2) have many enhancements, such as FTLB, L1-VCache, the EI/DI/Wait/Prefetch instructions, the DSP/DSPv2 ASE, the UserLocal register, Read-Inhibit/Execute-Inhibit, SFB (Store Fill Buffer), fast TLB refill support, etc. This patch introduces a config option, CONFIG_LOONGSON3_ENHANCEMENT, to enable those enhancements which are not probed at run time. If you want a generic kernel to run on all Loongson-3 machines, please say 'N' here. If you want a high-performance kernel to run on new Loongson-3 machines only, please say 'Y' here. Some additional explanations: 1) The SFB sits between the core and the L1 cache; it can cause memory accesses to complete out of order, so writel/outl (and other similar functions) need an I/O reorder barrier. 2) Loongson-3 has a bug whereby the di instruction cannot save the irqflag, so arch_local_irq_save() is modified. Since CPU_MIPSR2 is selected by CONFIG_LOONGSON3_ENHANCEMENT, a generic kernel doesn't use ei/di at all. 3) CPU_HAS_PREFETCH is selected by CONFIG_LOONGSON3_ENHANCEMENT, so MIPS_CPU_PREFETCH (used by uasm) probing is also put in this patch. Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Steven J. Hill <sjhill@realitydiluted.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12755/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Loongson-3: Fast TLB refill handler  (Huacai Chen)
Loongson-3A R2 has pwbase/pwfield/pwsize/pwctl registers in CP0 (this is very similar to HTW) and lwdir/lwpte/lddir/ldpte instructions which can be used for fast TLB refill. [ralf@linux-mips.org: Resolve conflict.] Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Steven J . Hill <sjhill@realitydiluted.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12754/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Loongson: Invalidate special TLBs when needed  (Huacai Chen)
Loongson-2 has a 4 entry itlb which is a subset of jtlb, Loongson-3 has a 4 entry itlb and a 4 entry dtlb which are subsets of jtlb. We should write diag register to invalidate itlb/dtlb when flushing jtlb because itlb/dtlb are not totally transparent to software. For Loongson-3A R2 (and newer), we should invalidate ITLB, DTLB, VTLB and FTLB before we enable/disable FTLB. Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Steven J . Hill <sjhill@realitydiluted.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12753/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Loongson-3: Set cache flush handlers to cache_noop  (Huacai Chen)
Loongson-3 maintains cache coherency in hardware, which means: 1) Its icache is coherent with the dcache. 2) Its dcaches don't alias (this may depend on PAGE_SIZE). 3) It maintains cache coherency across cores (and for DMA). So we can skip most cache flush operations by setting the relevant handlers to `cache_noop' in `r4k_cache_init'. Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Steven J. Hill <sjhill@realitydiluted.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12752/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
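A sketch of the idea in r4k_cache_init() (only handlers with a plain void(void) prototype are shown; the real change covers more of them):

    if (current_cpu_type() == CPU_LOONGSON3) {
            /* Hardware keeps icache/dcache and all cores coherent, so the
             * whole-cache flushes can be turned into no-ops. */
            flush_cache_all   = cache_noop;
            __flush_cache_all = cache_noop;
    }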
2016-05-13  MIPS: Loongson: Add Loongson-3A R2 basic support  (Huacai Chen)
Loongson-3 CPU family:

    Code-name       Brand-name       PRId
    Loongson-3A R1  Loongson-3A1000  0x6305
    Loongson-3A R2  Loongson-3A2000  0x6308
    Loongson-3B R1  Loongson-3B1000  0x6306
    Loongson-3B R2  Loongson-3B1500  0x6307

Features of the R2 revision of Loongson-3A:
- Primary cache includes I-Cache, D-Cache and V-Cache (Victim Cache).
- I-Cache, D-Cache and V-Cache are 16-way set-associative, linesize is 64 bytes.
- 64 entries of VTLB (classic TLB), 1024 entries of FTLB (8-way set-associative).
- Supports DSP/DSPv2 instructions, UserLocal register and Read-Inhibit/Execute-Inhibit.

[ralf@linux-mips.org: Resolved merge conflicts.] Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Steven J. Hill <sjhill@realitydiluted.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12751/ Patchwork: https://patchwork.linux-mips.org/patch/13136/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: BMIPS: local_r4k___flush_cache_all needs to blast S-cache  (Florian Fainelli)
local_r4k___flush_cache_all() is missing a special check for BMIPS5000 processors, we need to blast the S-cache, just like other MTI processors since we have an inclusive cache. We also need an additional __sync() to make sure this is completed. Fixes: d74b0172e4e2c ("MIPS: BMIPS: Add special cache handling in c-r4k.c") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13012/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
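The fix amounts to an extra case in local_r4k___flush_cache_all(), roughly:

    switch (current_cpu_type()) {
    case CPU_BMIPS5000:
            /* The inclusive S-cache must be blasted as well, and the extra
             * __sync() makes sure that has completed. */
            r4k_blast_scache();
            __sync();
            break;
    }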
2016-05-13  MIPS: BMIPS: Clear MIPS_CACHE_ALIASES earlier  (Florian Fainelli)
BMIPS5000 and BMIPS5200 processors have no D-cache aliases, and this is properly handled by the per-CPU override added at the end of r4k_cache_init(). The problem is that the output of probe_pcache() disagrees with it, since the override happens too late:

    Primary instruction cache 32kB, VIPT, 4-way, linesize 64 bytes.
    Primary data cache 32kB, 4-way, VIPT, cache aliases, linesize 32 bytes

With the change moved earlier, we now get output consistent with the settings we intend to have:

    Primary instruction cache 32kB, VIPT, 4-way, linesize 64 bytes.
    Primary data cache 32kB, 4-way, VIPT, no aliases, linesize 32 bytes

Fixes: d74b0172e4e2c ("MIPS: BMIPS: Add special cache handling in c-r4k.c") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13011/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: BMIPS: BMIPS5000 has I-cache filling from D-cache  (Florian Fainelli)
BMIPS5000 and BMIPS5200 processors have their I-cache filling from the D-cache. Since BMIPS_GENERIC does not (yet) provide a cpu-feature-overrides.h file, this was not set anywhere, so make sure the R4K cache detection takes care of that. Fixes: d74b0172e4e2c ("MIPS: BMIPS: Add special cache handling in c-r4k.c") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13010/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Sync icache & dcache in set_pte_at  (Paul Burton)
It's possible for pages to become visible prior to update_mmu_cache running if a thread within the same address space preempts the current thread or runs simultaneously on another CPU. That is, the following scenario is possible: CPU0 CPU1 write to page flush_dcache_page flush_icache_page set_pte_at map page update_mmu_cache If CPU1 maps the page in between CPU0's set_pte_at, which marks it valid & visible, and update_mmu_cache where the dcache flush occurs then CPU1s icache will fill from stale data (unless it fills from the dcache, in which case all is good, but most MIPS CPUs don't have this property). Commit 4d46a67a3eb8 ("MIPS: Fix race condition in lazy cache flushing.") attempted to fix that by performing the dcache flush in flush_icache_page such that it occurs before the set_pte_at call makes the page visible. However it has the problem that not all code that writes to pages exposed to userland call flush_icache_page. There are many callers of set_pte_at under mm/ and only 2 of them do call flush_icache_page. Thus the race window between a page becoming visible & being coherent between the icache & dcache remains open in some cases. To illustrate some of the cases, a WARN was added to __update_cache with this patch applied that triggered in cases where a page about to be flushed from the dcache was not the last page provided to flush_icache_page. That is, backtraces were obtained for cases in which the race window is left open without this patch. The 2 standout examples follow. When forking a process: [ 15.271842] [<80417630>] __update_cache+0xcc/0x188 [ 15.277274] [<80530394>] copy_page_range+0x56c/0x6ac [ 15.282861] [<8042936c>] copy_process.part.54+0xd40/0x17ac [ 15.289028] [<80429f80>] do_fork+0xe4/0x420 [ 15.293747] [<80413808>] handle_sys+0x128/0x14c When exec'ing an ELF binary: [ 14.445964] [<80417630>] __update_cache+0xcc/0x188 [ 14.451369] [<80538d88>] move_page_tables+0x414/0x498 [ 14.457075] [<8055d848>] setup_arg_pages+0x220/0x318 [ 14.462685] [<805b0f38>] load_elf_binary+0x530/0x12a0 [ 14.468374] [<8055ec3c>] search_binary_handler+0xbc/0x214 [ 14.474444] [<8055f6c0>] do_execveat_common+0x43c/0x67c [ 14.480324] [<8055f938>] do_execve+0x38/0x44 [ 14.485137] [<80413808>] handle_sys+0x128/0x14c These code paths write into a page, call flush_dcache_page then call set_pte_at without flush_icache_page inbetween. The end result is that the icache can become corrupted & userland processes may execute unexpected or invalid code, typically resulting in a reserved instruction exception, a trap or a segfault. Fix this race condition fully by performing any cache maintenance required to keep the icache & dcache in sync in set_pte_at, before the page is made valid. This has the added bonus of ensuring the cache maintenance always happens in one location, rather than being duplicated in flush_icache_page & update_mmu_cache. It also matches the way other architectures solve the same problem (see arm, ia64 & powerpc). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reported-by: Ionela Voinescu <ionela.voinescu@imgtec.com> Cc: Lars Persson <lars.persson@axis.com> Fixes: 4d46a67a3eb8 ("MIPS: Fix race condition in lazy cache flushing.") Cc: Steven J. Hill <sjhill@realitydiluted.com> Cc: David Daney <david.daney@cavium.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jerome Marchand <jmarchan@redhat.com> Cc: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: stable <stable@vger.kernel.org> # v4.1+ Patchwork: https://patchwork.linux-mips.org/patch/12722/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Handle highmem pages in __update_cache  (Paul Burton)
The following patch will expose __update_cache to highmem pages. Handle them by mapping them in for the duration of the cache maintenance, just like in __flush_dcache_page. The code for that isn't shared because we need the page address in __update_cache, so sharing became messy. Given that the entirety is an extra 5 lines, just duplicate it. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Lars Persson <lars.persson@axis.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jerome Marchand <jmarchan@redhat.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: stable <stable@vger.kernel.org> # v4.1+ Patchwork: https://patchwork.linux-mips.org/patch/12721/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Flush highmem pages in __flush_dcache_page  (Paul Burton)
When flush_dcache_page is called on an executable page, that page is about to be provided to userland & we can presume that the icache contains no valid entries for its address range. However if the icache does not fill from the dcache then we cannot presume that the pages content has been written back as far as the memories that the dcache will fill from (ie. L2 or further out). This was being done for lowmem pages, but not for highmem which can lead to icache corruption. Fix this by mapping highmem pages & flushing their content from the dcache in __flush_dcache_page before providing the page to userland, just as is done for lowmem pages. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Lars Persson <lars.persson@axis.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12720/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
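A simplified sketch of the highmem path now taken by __flush_dcache_page() (the real code also has to consider cache aliasing, which is omitted here):

    if (PageHighMem(page)) {
            void *addr = kmap_atomic(page);

            flush_data_cache_page((unsigned long)addr);
            kunmap_atomic(addr);
    } else {
            flush_data_cache_page((unsigned long)page_address(page));
    }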
2016-05-13  MIPS: Add M6250 cases to CPU switch statements  (Paul Burton)
Add cases supporting the M6250 CPU to various switch statements in the core MIPS kernel code that define behaviour dependent upon the CPU. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Joshua Kinard <kumba@gentoo.org> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Maciej W. Rozycki <macro@codesourcery.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12374/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-13  MIPS: Add P6600 cases to CPU switch statements  (Paul Burton)
Add cases supporting the P6600 CPU to various switch statements in core MIPS kernel code that define behaviour dependent upon the CPU. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Maciej W. Rozycki <macro@imgtec.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Joshua Kinard <kumba@gentoo.org> Cc: Andrzej Hajda <a.hajda@samsung.com> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Petri Gynther <pgynther@google.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12343/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-09  MIPS: dma-default: Defend against NULL dev in massage_gfp_flags  (Matt Redfearn)
This patch ensures that the dev parameter is checked for NULL before it is dereferenced in massage_gfp_flags. If dev is NULL, then fall back to setting the GFP flags as requested and available. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/11919/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
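A sketch of the guarded helper (the fallback flag chosen here is an assumption for illustration, not a quote of the final patch, which also has to honour the configured DMA zones):

    static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
    {
            gfp_t dma_flag;

            if (dev == NULL)
                    dma_flag = __GFP_DMA;   /* no device info: be conservative */
            else if (dev->coherent_dma_mask < DMA_BIT_MASK(32))
                    dma_flag = __GFP_DMA;
            else
                    dma_flag = 0;

            return gfp | dma_flag;
    }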
2016-05-09  MIPS: I6400: Icache fills from dcache  (James Hogan)
Coherence Manager 3 (CM3) as present in I6400 can fill icache lines effectively from dirty dcaches, so there is no need to flush dirty lines from dcaches through to L2 prior to icache invalidation. Set the MIPS_CACHE_IC_F_DC flag such that cpu_has_ic_fills_f_dc evaluates to true, which avoids those dcache flushes. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: Manuel Lauss <manuel.lauss@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12180/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-09  MIPS: c-r4k: Sync icache when it fills from dcache  (James Hogan)
It is still necessary to handle icache coherency in flush_cache_range() and copy_to_user_page() when the icache fills from the dcache, even though the dcache does not need to be written back. However when this handling was added in commit 2eaa7ec286db ("[MIPS] Handle I-cache coherency in flush_cache_range()"), it did not do any icache flushing when it fills from dcache. Therefore fix r4k_flush_cache_range() to run local_r4k_flush_cache_range() without taking into account whether icache fills from dcache, so that the icache coherency gets handled. Checks are also added in local_r4k_flush_cache_range() so that the dcache blast doesn't take place when icache fills from dcache. A test to mmap a page PROT_READ|PROT_WRITE, modify code in it, and mprotect it to VM_READ|VM_EXEC (similar to case described in above commit) can hit this case quite easily to verify the fix. A similar check was added in commit f8829caee311 ("[MIPS] Fix aliasing bug in copy_to_user_page / copy_from_user_page"), so also fix copy_to_user_page() similarly, to call flush_cache_page() without taking into account whether icache fills from dcache, since flush_cache_page() already takes that into account to avoid performing a dcache flush. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: Manuel Lauss <manuel.lauss@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12179/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-04-03  MIPS: Fix misspellings in comments.  (Adam Buchbinder)
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com> Cc: linux-mips@linux-mips.org Cc: trivial@kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12617/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-04-03  MIPS: tlb-r4k: panic if the MMU doesn't support PAGE_SIZE  (Paul Burton)
After writing the appropriate mask to the cop0 PageMask register, read the register back & check it matches what we want. If it doesn't then the MMU does not support the page size the kernel is configured for and we're better off bailing than continuing to do odd things with TLB exceptions. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Joshua Kinard <kumba@gentoo.org> Cc: Rafał Miłecki <zajec5@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/10691/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
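The check is essentially a write/read-back of PageMask (a sketch; the panic message text is illustrative):

    write_c0_pagemask(PM_DEFAULT_MASK);
    back_to_back_c0_hazard();
    if (read_c0_pagemask() != PM_DEFAULT_MASK)
            panic("MMU doesn't support PAGE_SIZE=0x%lx", PAGE_SIZE);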
2016-03-20  Merge branch 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 protection key support from Ingo Molnar: "This tree adds support for a new memory protection hardware feature that is available in upcoming Intel CPUs: 'protection keys' (pkeys). There's a background article at LWN.net: https://lwn.net/Articles/643797/ The gist is that protection keys allow the encoding of user-controllable permission masks in the pte. So instead of having a fixed protection mask in the pte (which needs a system call to change and works on a per page basis), the user can map a (handful of) protection mask variants and can change the masks runtime relatively cheaply, without having to change every single page in the affected virtual memory range. This allows the dynamic switching of the protection bits of large amounts of virtual memory, via user-space instructions. It also allows more precise control of MMU permission bits: for example the executable bit is separate from the read bit (see more about that below). This tree adds the MM infrastructure and low level x86 glue needed for that, plus it adds a high level API to make use of protection keys - if a user-space application calls: mmap(..., PROT_EXEC); or mprotect(ptr, sz, PROT_EXEC); (note PROT_EXEC-only, without PROT_READ/WRITE), the kernel will notice this special case, and will set a special protection key on this memory range. It also sets the appropriate bits in the Protection Keys User Rights (PKRU) register so that the memory becomes unreadable and unwritable. So using protection keys the kernel is able to implement 'true' PROT_EXEC on x86 CPUs: without protection keys PROT_EXEC implies PROT_READ as well. Unreadable executable mappings have security advantages: they cannot be read via information leaks to figure out ASLR details, nor can they be scanned for ROP gadgets - and they cannot be used by exploits for data purposes either. We know about no user-space code that relies on pure PROT_EXEC mappings today, but binary loaders could start making use of this new feature to map binaries and libraries in a more secure fashion. There is other pending pkeys work that offers more high level system call APIs to manage protection keys - but those are not part of this pull request. Right now there's a Kconfig that controls this feature (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) that is default enabled (like most x86 CPU feature enablement code that has no runtime overhead), but it's not user-configurable at the moment. 
If there's any serious problem with this then we can make it configurable and/or flip the default" * 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits) x86/mm/pkeys: Fix mismerge of protection keys CPUID bits mm/pkeys: Fix siginfo ABI breakage caused by new u64 field x86/mm/pkeys: Fix access_error() denial of writes to write-only VMA mm/core, x86/mm/pkeys: Add execute-only protection keys support x86/mm/pkeys: Create an x86 arch_calc_vm_prot_bits() for VMA flags x86/mm/pkeys: Allow kernel to modify user pkey rights register x86/fpu: Allow setting of XSAVE state x86/mm: Factor out LDT init from context init mm/core, x86/mm/pkeys: Add arch_validate_pkey() mm/core, arch, powerpc: Pass a protection key in to calc_vm_flag_bits() x86/mm/pkeys: Actually enable Memory Protection Keys in the CPU x86/mm/pkeys: Add Kconfig prompt to existing config option x86/mm/pkeys: Dump pkey from VMA in /proc/pid/smaps x86/mm/pkeys: Dump PKRU with other kernel registers mm/core, x86/mm/pkeys: Differentiate instruction fetches x86/mm/pkeys: Optimize fault handling in access_error() mm/core: Do not enforce PKEY permissions on remote mm access um, pkeys: Add UML arch_*_access_permitted() methods mm/gup, x86/mm/pkeys: Check VMAs and PTEs for protection keys x86/mm/gup: Simplify get_user_pages() PTE bit handling ...
2016-03-17  mm: introduce page reference manipulation functions  (Joonsoo Kim)
The success of a CMA allocation largely depends on the success of migration, and a key factor in that is the page reference count. Until now, page references were manipulated by directly calling atomic functions, so we cannot track who manipulates them and where. That makes it hard to find the actual reason for a CMA allocation failure. CMA allocation should be guaranteed to succeed, so finding the offending place is really important. In this patch, call sites where the page reference is manipulated are converted to the newly introduced wrapper functions. This is a preparation step for adding a tracepoint to each page reference manipulation function; with that facility we can easily find the reason for a CMA allocation failure. There is no functional change in this patch. In addition, this patch also converts reference read sites. That will help a second step which renames page._count to something else and prevents later attempts to access it directly (suggested by Andrew). Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Acked-by: Michal Nazarewicz <mina86@mina86.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Minchan Kim <minchan@kernel.org> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
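The wrapper style being introduced looks like this (page->_count was the refcount field name at the time; a tracepoint can later be hooked into the wrapper in one place):

    static inline void page_ref_inc(struct page *page)
    {
            atomic_inc(&page->_count);
    }

    static inline int page_ref_count(struct page *page)
    {
            return atomic_read(&page->_count);
    }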
2016-02-29  MIPS: scache: Fix scache init with invalid line size.  (Govindraj Raja)
In the current scache init the cache line_size is determined from the cpu config register; however, if there is no scache then the mips_sc_probe_cm3 function populates an invalid line_size of 2. The invalid line_size can cause a NULL pointer dereference during r4k_dma_cache_inv, as r4k_blast_scache is populated based on line_size, and an scache line_size of 2 is an invalid option in r4k_blast_scache_setup. This issue was hit during bring-up of a MIPS I6400 based virtual platform where the scache was not available in the virtual platform model. Signed-off-by: Govindraj Raja <Govindraj.Raja@imgtec.com> Fixes: 7d53e9c4cd21 ("MIPS: CM3: Add support for CM3 L2 cache.") Cc: Paul Burton <paul.burton@imgtec.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hartley <James.Hartley@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # v4.2+ Patchwork: https://patchwork.linux-mips.org/patch/12710/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-02-27  mm: ASLR: use get_random_long()  (Daniel Cashman)
Replace calls to get_random_int() followed by a cast to (unsigned long) with calls to get_random_long(). Also address shifting bug which, in case of x86 removed entropy mask for mmap_rnd_bits values > 31 bits. Signed-off-by: Daniel Cashman <dcashman@android.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: David S. Miller <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Nick Kralevich <nnk@google.com> Cc: Jeff Vander Stoep <jeffv@google.com> Cc: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
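A sketch of the change on the arch side (mmap_rnd_bits is used here purely for illustration; not every architecture derives its randomisation range this way):

    static unsigned long arch_mmap_rnd(void)
    {
            unsigned long rnd;

            /* was: rnd = (unsigned long)get_random_int();  -- at most 32 bits */
            rnd = get_random_long();

            return (rnd & ((1UL << mmap_rnd_bits) - 1)) << PAGE_SHIFT;
    }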
2016-02-16  mm/gup: Switch all callers of get_user_pages() to not pass tsk/mm  (Dave Hansen)
We will soon modify the vanilla get_user_pages() so it can no longer be used on mm/tasks other than 'current/current->mm', which is by far the most common way it is called. For now, we allow the old-style calls, but warn when they are used. (implemented in previous patch) This patch switches all callers of: get_user_pages() get_user_pages_unlocked() get_user_pages_locked() to stop passing tsk/mm so they will no longer see the warnings. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: jack@suse.cz Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/20160212210156.113E9407@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-09  MIPS: Fix early CM probing  (Paul Burton)
Commit c014d164f21d ("MIPS: Add platform callback before initializing the L2 cache") added a platform_early_l2_init function in order to allow platforms to probe for the CM before L2 initialisation is performed, so that CM GCRs are available to mips_sc_probe. That commit actually fails to do anything useful, since it checks mips_cm_revision to determine whether it should call mips_cm_probe but the result of mips_cm_revision will always be 0 until mips_cm_probe has been called. Thus the "early" mips_cm_probe call never occurs. Fix this & drop the useless weak platform_early_l2_init function by simply calling mips_cm_probe from setup_arch. For platforms that don't select CONFIG_MIPS_CM this will be a no-op, and for those that do it removes the requirement for them to call mips_cm_probe manually (although doing so isn't harmful for now). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Cc: Andrzej Hajda <a.hajda@samsung.com> Cc: Aaro Koskinen <aaro.koskinen@nokia.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Rob Herring <robh@kernel.org> Cc: Peter Hurley <peter@hurleysoftware.com> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Jaedon Shin <jaedon.shin@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Jonas Gorski <jogo@openwrt.org> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12475/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-01-24  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds)
Pull MIPS updates from Ralf Baechle: "This is the main pull request for MIPS for 4.5 plus some 4.4 fixes. The executive summary: - ATH79 platform improvements, use DT bindings for the ATH79 USB PHY. - Avoid useless rebuilds for zboot. - jz4780: Add NEMC, BCH and NAND device tree nodes. - Initial support for MicroChip's DT platform. As all the device drivers are missing this is still of limited use. - Some Loongson3 cleanups. - The unavoidable whitespace polishing. - Reduce clock skew when synchronizing the CPU cycle counters on CPU startup. - Add MIPS R6 fixes. - Lots of cleanups across arch/mips as fallout from KVM. - Lots of minor fixes and changes for IEEE 754-2008 support to the FPU emulator / fp-assist software. - Minor Ralink, BCM47xx and bcm963xx platform support improvements. - Support SMP on BCM63168" * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (84 commits) MIPS: zboot: Add support for serial debug using the PROM MIPS: zboot: Avoid useless rebuilds MIPS: BMIPS: Enable ARCH_WANT_OPTIONAL_GPIOLIB MIPS: bcm63xx: nvram: Remove unused bcm63xx_nvram_get_psi_size() function MIPS: bcm963xx: Update bcm_tag field image_sequence MIPS: bcm963xx: Move extended flash address to bcm_tag header file MIPS: bcm963xx: Move Broadcom BCM963xx image tag data structure MIPS: bcm63xx: nvram: Use nvram structure definition from header file MIPS: bcm963xx: Add Broadcom BCM963xx board nvram data structure MAINTAINERS: Add KVM for MIPS entry MIPS: KVM: Add missing newline to kvm_err() MIPS: Move KVM specific opcodes into asm/inst.h MIPS: KVM: Use cacheops.h definitions MIPS: Break down cacheops.h definitions MIPS: Use EXCCODE_ constants with set_except_vector() MIPS: Update trap codes MIPS: Move Cause.ExcCode trap codes to mipsregs.h MIPS: KVM: Make kvm_mips_{init,exit}() static MIPS: KVM: Refactor added offsetof()s MIPS: KVM: Convert EXPORT_SYMBOL to _GPL ...
2016-01-24  MIPS: Fix some missing CONFIG_CPU_MIPSR6 #ifdefs  (Huacai Chen)
Commit be0c37c985eddc4 (MIPS: Rearrange PTE bits into fixed positions.) defines fixed PTE bits for MIPS R2. Then, commit d7b631419b3d230a4d383 (MIPS: pgtable-bits: Fix XPA damage to R6 definitions.) adds the MIPS R6 definitions in the same way as MIPS R2. But some R6 #ifdefs in the later commit are missing, so in this patch I fix that. Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12164/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
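The guard pattern the fix completes is the usual dual test in pgtable-bits.h, i.e. (sketch):

    #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
    /* fixed PTE bit positions shared by R2 and R6 */
    #else
    /* legacy, variable layout */
    #endif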
2016-01-15  mm: differentiate page_mapped() from page_mapcount() for compound pages  (Kirill A. Shutemov)
Let's define page_mapped() to be true for compound pages if any sub-pages of the compound page is mapped (with PMD or PTE). On other hand page_mapcount() return mapcount for this particular small page. This will make cases like page_get_anon_vma() behave correctly once we allow huge pages to be mapped with PTE. Most users outside core-mm should use page_mapcount() instead of page_mapped(). Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Tested-by: Sasha Levin <sasha.levin@oracle.com> Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Jerome Marchand <jmarchan@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Steve Capper <steve.capper@linaro.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-15  mips, thp: remove infrastructure for handling splitting PMDs  (Kirill A. Shutemov)
With new refcounting we don't need to mark PMDs splitting. Let's drop code to handle this. pmdp_splitting_flush() is not needed too: on splitting PMD we will do pmdp_clear_flush() + set_pte_at(). pmdp_clear_flush() will do IPI as needed for fast_gup. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Jerome Marchand <jmarchan@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Steve Capper <steve.capper@linaro.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-15  mm: drop tail page refcounting  (Kirill A. Shutemov)
Tail page refcounting is utterly complicated and painful to support. It uses ->_mapcount on tail pages to store how many times this page is pinned. get_page() bumps ->_mapcount on tail page in addition to ->_count on head. This information is required by split_huge_page() to be able to distribute pins from head of compound page to tails during the split. We will need ->_mapcount to account PTE mappings of subpages of the compound page. We eliminate need in current meaning of ->_mapcount in tail pages by forbidding split entirely if the page is pinned. The only user of tail page refcounting is THP which is marked BROKEN for now. Let's drop all this mess. It makes get_page() and put_page() much simpler. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Tested-by: Sasha Levin <sasha.levin@oracle.com> Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Jerome Marchand <jmarchan@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Steve Capper <steve.capper@linaro.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-12-12  MIPS: fix DMA contiguous allocation  (Qais Yousef)
Recent changes to how GFP_ATOMIC is defined seems to have broken the condition to use mips_alloc_from_contiguous() in mips_dma_alloc_coherent(). I couldn't bottom out the exact change but I think it's this commit d0164adc89f6 ("mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd"). GFP_ATOMIC has multiple bits set and the check for !(gfp & GFP_ATOMIC) isn't enough. The reason behind this condition is to check whether we can potentially do a sleeping memory allocation. Use gfpflags_allow_blocking() instead which should be more robust. Signed-off-by: Qais Yousef <qais.yousef@imgtec.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
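The condition change looks roughly like this inside mips_dma_alloc_coherent() (argument list abbreviated for illustration):

    /* before: GFP_ATOMIC is several bits, so this test stopped meaning
     * "may we sleep?" */
    if (IS_ENABLED(CONFIG_DMA_CMA) && !(gfp & GFP_ATOMIC))
            page = mips_alloc_from_contiguous(dev, count, get_order(size));

    /* after: ask the question that was actually intended */
    if (IS_ENABLED(CONFIG_DMA_CMA) && gfpflags_allow_blocking(gfp))
            page = mips_alloc_from_contiguous(dev, count, get_order(size));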
2015-11-15  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds)
Pull MIPS updates from Ralf Baechle: "These are the highlights of the main MIPS pull request for 4.4: - Add latencytop support - Support appended DTBs - VDSO support and initially use it for gettimeofday. - Drop the .MIPS.abiflags and ELF NOTE sections from vmlinux - Support for the 5KE, an internal test core. - Switch all MIPS platforms to libata drivers. - Improved support, cleanups for ralink and Lantiq platforms. - Support for the new xilfpga platform. - A number of DTB improvements for BMIPS. - Improved support for CM and CPS. - Minor JZ4740 and BCM47xx enhancements" * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (120 commits) MIPS: idle: add case for CPU_5KE MIPS: Octeon: Support APPENDED_DTB MIPS: vmlinux: create a section for appended DTB MIPS: Clean up compat_siginfo_t MIPS: Fix PAGE_MASK definition MIPS: BMIPS: Enable GZIP ramdisk and timed printks MIPS: Add xilfpga defconfig MIPS: xilfpga: Add mipsfpga platform code MIPS: xilfpga: Add xilfpga device tree files. dt-bindings: MIPS: Document xilfpga bindings and boot style MIPS: Make MIPS_CMDLINE_DTB default MIPS: Make the kernel arguments from dtb available MIPS: Use USE_OF as the guard for appended dtb MIPS: BCM63XX: Use pr_* instead of printk MIPS: Loongson: Cleanup CONFIG_LOONGSON_SUSPEND. MIPS: lantiq: Disable xbar fpi burst mode MIPS: lantiq: Force the crossbar to big endian MIPS: lantiq: Initialize the USB core on boot MIPS: lantiq: Return correct value for fpi clock on ar9 MIPS: ralink: Add missing clock on rt305x ...
2015-11-11  MIPS: Extend hardware table walking support to MIPS64  (Paul Burton)
Extend the existing support for Hardware Table Walking (HTW) to MIPS64 systems by supporting PMDs & setting the pointer size bit in PWSize, then ceasing to blacklist HTW on MIPS64 systems. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Joshua Kinard <kumba@gentoo.org> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-kernel@vger.kernel.org Cc: James Hogan <james.hogan@imgtec.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Patchwork: https://patchwork.linux-mips.org/patch/11224/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11  MIPS: tlbex: Avoid placing software PTE bits in Entry* PFN fields  (Paul Burton)
Commit 748e787eb6de ("MIPS: Optimize TLB refill for RI/XI configurations.") stopped explicitly clearing the bits used by software in PTEs by making use of a rotate instruction that rotates them into the fill bits of the Entry{Lo,Hi} register. This can only work if there are actually enough fill bits in the register to cover the software maintained bits, otherwise we end up writing those bits into the upper bits of the PFN or PFNX field of the Entry{Lo,Hi} register. Fix this by detecting the number of fill bits present in the Entry{Lo,Hi} registers & explicitly clearing the software bits where necessary. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-kernel@vger.kernel.org Cc: James Hogan <james.hogan@imgtec.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Patchwork: https://patchwork.linux-mips.org/patch/11218/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11  MIPS: tlbex: Share MIPS32 32 bit phys & MIPS64 64 bit phys code  (Paul Burton)
The code in build_update_entries for 64 bit physical addresses on a MIPS64 CPU and 32 bit physical addresses on a MIPS32 CPU is now identical, with the exception of the r4k bug workaround in the latter, which would simply not apply to the former. Remove the duplication. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: linux-kernel@vger.kernel.org Cc: James Hogan <james.hogan@imgtec.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Patchwork: https://patchwork.linux-mips.org/patch/11216/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>