path: root/arch/sh/include/asm/tlb.h
2011-05-31  sh: asm/tlb.h needs linux/swap.h  (Nobuhiro Iwamatsu)
Commit 1e56a56410bb64bce62d44563e35a143fc2d515f introduced the mmu_gather rework for sh, but missed a linux/swap.h include:

      CC      arch/sh/mm/tlb-urb.o
    In file included from arch/sh/mm/tlb-urb.c:14:0:
    arch/sh/include/asm/tlb.h: In function '__tlb_remove_page':
    arch/sh/include/asm/tlb.h:92:2: error: implicit declaration of function 'free_page_and_swap_cache'

Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
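The fix itself amounts to a one-line include near the top of the header; a minimal sketch of the change (surrounding context omitted):

    /* arch/sh/include/asm/tlb.h */
    #include <linux/swap.h>         /* provides free_page_and_swap_cache() */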
2011-05-25  sh: mmu_gather rework  (Peter Zijlstra)
Fix up the sh mmu_gather code to conform to the new API.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Tony Luck <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
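For context, a hedged sketch of the shape a per-arch mmu_gather takes under the reworked API on sh: a small on-stack structure plus inline helpers, with __tlb_remove_page() freeing pages immediately, which is where the free_page_and_swap_cache() dependency fixed above comes from. Field layout and return-value details are illustrative, not a verbatim copy of arch/sh/include/asm/tlb.h:

    struct mmu_gather {
            struct mm_struct        *mm;
            unsigned int            fullmm;         /* non-zero when tearing down the whole mm */
            unsigned long           start, end;     /* range touched so far, for a ranged flush */
    };

    static inline void
    tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
    {
            tlb->mm = mm;
            tlb->fullmm = full_mm_flush;
            tlb->start = TASK_SIZE;                 /* empty range until entries are removed */
            tlb->end = 0;
    }

    static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
    {
            free_page_and_swap_cache(page);         /* needs <linux/swap.h>, hence the fix above */
            return 1;                               /* no batching, so the caller never flushes early */
    }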
2010-01-19  sh: Split out MMUCR.URB based entry wiring into a shared helper.  (Paul Mundt)
Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the helpers out into a generic tlb-urb that can be used by any parts equipped with MMUCR.URB.

At the same time, move the SH-5 code out-of-line, as we require a single global state for DTLB entry wiring.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
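Conceptually, the shared tlb-urb helper reserves a TLB slot by shrinking the region subject to hardware random replacement (the MMUCR.URB boundary) and then loading the translation into the now-protected slot. The sketch below shows that read-modify-write pattern; the constant names and exact bookkeeping are assumptions for illustration rather than the literal arch/sh/mm/tlb-urb.c code:

    void tlb_wire_entry(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
    {
            unsigned long status, flags;
            int urb;

            local_irq_save(flags);

            /* Pull the current wired-entry boundary out of MMUCR.URB. */
            status = __raw_readl(MMUCR);
            urb = (status & MMUCR_URB) >> MMUCR_URB_SHIFT;
            status &= ~MMUCR_URB;

            /* Claim one more entry for wiring; never wire away the whole TLB. */
            BUG_ON(urb == 0);
            urb--;
            status |= urb << MMUCR_URB_SHIFT;

            __raw_writel(status, MMUCR);
            ctrl_barrier();

            /* Load the translation into the slot now excluded from random replacement. */
            __update_tlb(vma, addr, pte);

            local_irq_restore(flags);
    }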
2010-01-16  sh: Acquire some more page flags for SH-5.  (Matt Fleming)
We need some more page flags to hook up _PAGE_WIRED (and eventually other things). So use the unused PTE bits above the PPN field, as no implementations currently use these for anything.

Now that we have _PAGE_WIRED, let's provide the SH-5 functions for wiring up TLB entries.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
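The idea is simply to carve a software-defined flag out of PTE bits that nothing currently uses (the bits above the PPN field). A hedged illustration, with the bit position and helper names chosen for the example rather than taken from the actual pgtable headers:

    /* A bit above the PPN field: unused by every implementation, so free for software. */
    #define _PAGE_WIRED             (1ULL << 48)    /* bit position is illustrative only */

    #define pte_wired(pte)          (!!(pte_val(pte) & _PAGE_WIRED))
    #define pte_mkwired(pte)        (__pte(pte_val(pte) | _PAGE_WIRED))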
2010-01-16  sh: New extended page flag to wire/unwire TLB entries  (Matt Fleming)
Provide a new extended page flag, _PAGE_WIRED, and an SH4 implementation for wiring TLB entries, and use it in the fixmap code path so that we can wire the fixmap TLB entry.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
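What "use it in the fixmap code path" amounts to, roughly: the fixmap PTE gets _PAGE_WIRED folded into its protection bits, and the SH4 wiring helper is then invoked so the entry is never evicted. The helper below is a hedged sketch, not the literal arch code; 'set_fixmap_wired', 'ptep' and 'fixmap_vma' are hypothetical stand-ins for whatever the real caller supplies:

    extern struct vm_area_struct fixmap_vma;        /* hypothetical placeholder vma */

    static void set_fixmap_wired(pte_t *ptep, unsigned long addr,
                                 unsigned long phys, pgprot_t prot)
    {
            pte_t pte = pfn_pte(phys >> PAGE_SHIFT,
                                __pgprot(pgprot_val(prot) | _PAGE_WIRED));

            set_pte(ptep, pte);                     /* install the fixmap mapping */
            tlb_wire_entry(&fixmap_vma, addr, pte); /* pin it with the SH4 wiring helper */
    }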
2009-07-27  mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()  (Benjamin Herrenschmidt)
Upcoming patches to support the new 64-bit "BookE" powerpc architecture will need to have the virtual address corresponding to the PTE page when freeing it, due to the way the HW table walker works.

Basically, the TLB can be loaded with "large" pages that cover the whole virtual space (well, sort-of, half of it actually) represented by a PTE page, and which contain an "indirect" bit indicating that this TLB entry RPN points to an array of PTEs from which the TLB can then create direct entries. Thus, in order to invalidate those when PTE pages are deleted, we need the virtual address to pass to tlbilx or tlbivax instructions.

The old trick of sticking it somewhere in the PTE page struct page sucks too much, the address is almost readily available in all call sites and almost everybody implements these as macros, so we may as well add the argument everywhere. I added it to the pmd and pud variants for consistency.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
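On the sh side, the visible change in this header is just the extra address argument threaded through the free-tlb hooks; sh has no use for the address and simply drops it. A hedged before/after sketch (the macro bodies are illustrative):

    /* Before: callers had no way to say which virtual range the PTE page covered. */
    #define __pte_free_tlb(tlb, pte)        pte_free((tlb)->mm, (pte))

    /* After: an address argument everywhere, ignored on sh. */
    #define __pte_free_tlb(tlb, pte, addr)  pte_free((tlb)->mm, (pte))
    #define __pmd_free_tlb(tlb, pmdp, addr) pmd_free((tlb)->mm, (pmdp))
    #define __pud_free_tlb(tlb, pudp, addr) pud_free((tlb)->mm, (pudp))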
2009-03-17  sh: Flush only the needed range when unmapping a VMA.  (Paul Mundt)
This follows the ARM change from Aaro Koskinen:

    When unmapping N pages (e.g. shared memory) the amount of TLB flushes done can be (N*PAGE_SIZE/ZAP_BLOCK_SIZE)*N although it should be N at maximum. With PREEMPT kernel ZAP_BLOCK_SIZE is 8 pages, so there is a noticeable performance penalty when unmapping a large VMA and the system is spending its time in flush_tlb_range().

    The problem is that tlb_end_vma() is always flushing the full VMA range. The subrange that needs to be flushed can be calculated by tlb_remove_tlb_entry(). This approach was suggested by Hugh Dickins, and is also used by other arches.

    The speed increase is roughly 3x for 8M mappings and for larger mappings even more.

Bits and pieces are taken from the ARM patch as well as the existing arch/um implementation that is quite similar. The end result is a significant reduction in both partial and full TLB flushes initiated through flush_tlb_range().

At the same time, the nommu implementation was broken, had a superfluous cache flush, and subsequently would have triggered a BUG_ON() if a code-path had triggered it. Tidy this up for correctness and provide a nopped-out implementation there.

More background on the initial discussion can be found at:

    http://marc.info/?t=123609820900002&r=1&w=2
    http://marc.info/?t=123660375800003&r=1&w=2

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
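The mechanism is that tlb_remove_tlb_entry() records the span of addresses actually unmapped, and tlb_end_vma() then flushes only that span instead of the whole VMA. A hedged sketch of the pattern (mirroring the ARM/um approach the message refers to, not a verbatim copy of the sh header):

    static inline void
    tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
    {
            /* Grow the pending flush range to cover this entry. */
            if (tlb->start > address)
                    tlb->start = address;
            if (tlb->end < address + PAGE_SIZE)
                    tlb->end = address + PAGE_SIZE;
    }

    static inline void
    tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
    {
            /* Flush only the subrange accumulated above, then reset to an empty range. */
            if (tlb->start < tlb->end) {
                    flush_tlb_range(vma, tlb->start, tlb->end);
                    tlb->start = TASK_SIZE;
                    tlb->end = 0;
            }
    }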
2008-07-29  sh: migrate to arch/sh/include/  (Paul Mundt)
This follows the sparc changes in commit a439fe51a1f8eb087c22dd24d69cebae4a3addac.

Most of the moving about was done with Sam's directions at:

    http://marc.info/?l=linux-sh&m=121724823706062&w=2

with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>