path: root/lib/lmb.c
2008-08-15  lmb: Fix reserved region handling in lmb_enforce_memory_limit().  (David S. Miller)

The idea of the implementation of this fix is from Michael Ellerman. This function has two loops, but they each interpret the memory_limit value differently. The first loop interprets it as a "size limit" whereas the second loop interprets it as an "address limit". Before the second loop runs, reset memory_limit to lmb_end_of_DRAM() so that it all works out.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Michael Ellerman <michael@ellerman.id.au>
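A minimal sketch of the shape of this fix (helper names such as lmb_remove_region() and the exact field layout are approximations of the lib/lmb.c internals, not the verbatim patch):

    void lmb_enforce_memory_limit(u64 memory_limit)
    {
        unsigned long i;
        u64 limit = memory_limit;

        /* First loop: 'limit' is a running size budget over lmb.memory. */
        for (i = 0; i < lmb.memory.cnt; i++) {
            if (limit > lmb.memory.region[i].size) {
                limit -= lmb.memory.region[i].size;
                continue;
            }
            lmb.memory.region[i].size = limit;
            lmb.memory.cnt = i + 1;
            break;
        }

        /* The fix: from here on memory_limit is used as an address limit,
         * so reset it to the (possibly truncated) end of DRAM. */
        memory_limit = lmb_end_of_DRAM();

        /* Second loop: drop or clip reserved regions above that address. */
        i = 0;
        while (i < lmb.reserved.cnt) {
            u64 base = lmb.reserved.region[i].base;
            u64 size = lmb.reserved.region[i].size;

            if (base >= memory_limit) {
                lmb_remove_region(&lmb.reserved, i);
                continue;       /* re-check the entry shifted into slot i */
            }
            if (base + size > memory_limit)
                lmb.reserved.region[i].size = memory_limit - base;
            i++;
        }
    }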
2008-05-18  lmb: Fix compile warning  (Kumar Gala)

lib/lmb.c: In function 'lmb_dump_all':
lib/lmb.c:51: warning: format '%lx' expects type 'long unsigned int', but argument 2 has type 'u64'

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
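The usual fix for this class of warning, as an illustrative snippet (not the exact line from lmb_dump_all()): u64 may be 'unsigned long long' on some builds, so the value is cast and printed with %llx.

    pr_info("    memory.size               = 0x%llx\n",
            (unsigned long long)lmb.memory.size);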
2008-05-12  lmb: Make lmb debugging more useful.  (David S. Miller)

Having to muck with the build and set DEBUG just to get lmb_dump_all() to print things isn't very useful. So use pr_info() and use an early boot param "lmb=debug" so we can simply ask users to reboot with this option when we need some debugging from them.

Signed-off-by: David S. Miller <davem@davemloft.net>
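A minimal sketch of wiring up such an early boot parameter (the option is literally "lmb=debug" as described above; the flag name and the strstr() check are illustrative):

    static int lmb_debug;

    static int __init early_lmb(char *p)
    {
        if (p && strstr(p, "debug"))
            lmb_debug = 1;
        return 0;
    }
    early_param("lmb", early_lmb);

    void lmb_dump_all(void)
    {
        if (!lmb_debug)
            return;

        pr_info("lmb_dump_all:\n");
        /* ... print lmb.memory and lmb.reserved regions via pr_info() ... */
    }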
2008-05-12  lmb: Fix inconsistent alignment of size argument.  (David S. Miller)

When allocating, if we will align up the size when making the reservation, we should also align the size for the check that the space is actually available. The simplest thing is to just align the size up from the beginning, then we can use plain 'size' throughout.

Signed-off-by: David S. Miller <davem@davemloft.net>
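For reference, a typical power-of-two align-up helper of the kind involved here (a hedged sketch; the real lmb code has its own helper). The point of the fix is that size is rounded once at the top of the allocator, e.g. size = lmb_align_up(size, align), and the same value is then used in both the availability check and the reservation.

    static inline u64 lmb_align_up(u64 x, u64 align)
    {
        /* Assumes 'align' is a power of two. */
        return (x + align - 1) & ~(align - 1);
    }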
2008-04-29  [POWERPC] Provide walk_memory_resource() for powerpc  (Badari Pulavarty)

Provide walk_memory_resource() for 64-bit powerpc. PowerPC maintains logical memory region mapping in the lmb.memory structure. Walk through these structures and do the callbacks for the contiguous chunks.

Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
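A hedged sketch of what such a walker looks like on 64-bit powerpc (the callback signature is approximate and the region fields follow the lmb structures loosely): intersect the requested PFN range with each lmb.memory region and invoke the callback once per contiguous chunk.

    int walk_memory_resource(unsigned long start_pfn, unsigned long nr_pages,
                             void *arg,
                             int (*func)(unsigned long, unsigned long, void *))
    {
        unsigned long end_pfn = start_pfn + nr_pages;
        unsigned long i;
        int ret = 0;

        for (i = 0; i < lmb.memory.cnt; i++) {
            unsigned long rg_start = lmb.memory.region[i].base >> PAGE_SHIFT;
            unsigned long rg_end = rg_start +
                    (lmb.memory.region[i].size >> PAGE_SHIFT);
            unsigned long lo = max(rg_start, start_pfn);
            unsigned long hi = min(rg_end, end_pfn);

            if (lo < hi)
                ret = func(lo, hi - lo, arg);   /* one contiguous chunk */
            if (ret)
                break;
        }
        return ret;
    }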
2008-04-29  [POWERPC] Update lmb data structures for hotplug memory add/remove  (Badari Pulavarty)

The powerpc kernel maintains information about logical memory blocks in the lmb.memory structure, which is initialized and updated at boot time, but not when memory is added or removed while the kernel is running. This adds a hotplug memory notifier which updates lmb.memory when memory is added or removed. This information is useful for the eHEA driver to find out the memory layout and holes.

NOTE: No special locking is needed for lmb_add() and lmb_remove(). Calls to these are serialized by the caller (pSeries_reconfig_chain).

Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
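A hedged sketch of a notifier of this general shape, using the generic memory-hotplug notifier types; the actual hook point and callback in the powerpc patch may differ, and the function name here is illustrative.

    static int lmb_memory_notifier(struct notifier_block *nb,
                                   unsigned long action, void *data)
    {
        struct memory_notify *arg = data;
        u64 base = (u64)arg->start_pfn << PAGE_SHIFT;
        u64 size = (u64)arg->nr_pages << PAGE_SHIFT;

        switch (action) {
        case MEM_ONLINE:
            lmb_add(base, size);        /* calls serialized by the caller */
            break;
        case MEM_OFFLINE:
            lmb_remove(base, size);
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block lmb_memory_nb = {
        .notifier_call = lmb_memory_notifier,
    };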
2008-04-23  [LMB]: Fix lmb allocation regression.  (David S. Miller)

Changeset d9024df02ffe74d723d97d552f86de3b34beb8cc ("[LMB] Restructure allocation loops to avoid unsigned underflow") removed the alignment of the 'size' argument passed to lmb_add_region() by __lmb_alloc_base().

In doing so it reintroduced the bug fixed by changeset eea89e13a9c61d3928223d2f9bf2295e22e0efb6 ("[LMB]: Fix bug in __lmb_alloc_base()."). This puts it back.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-15  [LMB] Restructure allocation loops to avoid unsigned underflow  (Paul Mackerras)

There is a potential bug in __lmb_alloc_base where we subtract `size' from the base address of a reserved region without checking whether the subtraction could wrap around and produce a very large unsigned value. In fact it probably isn't possible to hit the bug in practice since it would only occur in the situation where we can't satisfy the allocation request and there is a reserved region starting at 0.

This fixes the potential bug by breaking out of the loop when we get to the point where the base of the reserved region is less than the size requested. This also restructures the loop to be a bit easier to follow.

The same logic got copied into lmb_alloc_nid_unreserved, so this makes a similar change there. Here the bug is more likely to be hit because the outer loop (in lmb_alloc_nid) goes through the memory regions in increasing order rather than decreasing order as __lmb_alloc_base does, and we are therefore more likely to hit the case where we are testing against a reserved region with a base address of 0.

Signed-off-by: Paul Mackerras <paulus@samba.org>
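A hedged sketch of the restructured scan (the function name lmb_alloc_below() and helpers like lmb_align_down() and lmb_overlaps_region() approximate the lmb internals). The key point is the explicit check that the reserved region's base is at least 'size' before subtracting, so 'base - size' can never wrap.

    static u64 lmb_alloc_below(u64 size, u64 align, u64 start, u64 end)
    {
        u64 base;

        if (size > end)
            return 0;                       /* cannot fit below 'end' at all */

        base = lmb_align_down(end - size, align);

        while (start <= base) {
            long j = lmb_overlaps_region(&lmb.reserved, base, size);

            if (j < 0)
                return base;                /* no overlap with reserved: done */

            /* Break before 'base - size' could wrap below zero. */
            if (lmb.reserved.region[j].base < size)
                break;

            base = lmb_align_down(lmb.reserved.region[j].base - size, align);
        }
        return 0;                           /* allocation failed */
    }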
2008-04-15  [LMB] Fix some whitespace and other formatting issues, use pr_debug  (Paul Mackerras)

This makes no semantic changes. It fixes the whitespace and formatting a bit, gets rid of a local DBG macro and uses the equivalent pr_debug instead, and restructures one while loop that had a function call and assignment in the condition to be a bit more readable.

Some comments about functions being called with relocation disabled were also removed as they would just be confusing to most readers now that the code is in lib/.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-15  [LMB] Add lmb_alloc_nid()  (David S. Miller)

A variant of lmb_alloc() that tries to allocate memory on a specified NUMA node 'nid' but falls back to normal lmb_alloc() if that fails.

The caller provides a 'nid_range' function pointer which assists the allocator. It is given args 'start', 'end', and a pointer to integer 'this_nid'. It places at 'this_nid' the NUMA node id that corresponds to 'start', and returns the end address within 'start' to 'end' at which memory associated with 'nid' ends.

This callback allows a platform to use lmb_alloc_nid() in just about any context, even ones in which early_pfn_to_nid() might not be working yet.

This function will be used by the NUMA setup code on sparc64, and it can also be used by powerpc, replacing its hand-crafted "careful_allocation()" function in arch/powerpc/mm/numa.c.

If x86 ever converts its NUMA support over to using the LMB helpers, it can use this too as it has something entirely similar.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
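A purely hypothetical nid_range callback of the shape described above (the 1 GB-per-node striping is invented for illustration; a real platform would consult its firmware memory-to-node map):

    static u64 example_nid_range(u64 start, u64 end, int *this_nid)
    {
        u64 stripe = 1ULL << 30;                /* pretend each node owns a 1 GB stripe */
        u64 node_end = (start / stripe + 1) * stripe;

        *this_nid = (int)(start / stripe);      /* node that owns 'start' */
        return node_end < end ? node_end : end; /* where that node's memory ends */
    }

    /* Used roughly as:  paddr = lmb_alloc_nid(size, align, nid, example_nid_range); */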
2008-02-19  [LMB]: Fix lmb_add_region if region should be added at the head  (Kumar Gala)

We introduced a bug in fixing lmb_add_region to handle an initial region being non-zero. Before that fix it was impossible to insert a region at the head of the list since the first region always started at zero.

Now that it's possible for the first region to be non-zero, we need to check whether the new region should be added at the head and, if so, actually add it.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
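A minimal sketch of the head-insert path (the helper name is illustrative; the surrounding adjacency and coalescing checks in lmb_add_region() are omitted, and field names follow the lmb structures approximately):

    static void lmb_insert_at_head(struct lmb_region *rgn, u64 base, u64 size)
    {
        unsigned long i;

        /* Shift every existing entry up one slot... */
        for (i = rgn->cnt; i >= 1; i--) {
            rgn->region[i].base = rgn->region[i - 1].base;
            rgn->region[i].size = rgn->region[i - 1].size;
        }
        /* ...and place the new, lowest-based region first. */
        rgn->region[0].base = base;
        rgn->region[0].size = size;
        rgn->cnt++;
    }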
2008-02-13  [LMB]: Make lmb support large physical addressing  (Becky Bruce)

Convert the lmb code to use u64 instead of unsigned long for physical addresses and sizes. This is needed to support large amounts of RAM on 32-bit systems that support 36-bit physical addressing.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-13  [LMB]: Fix initial lmb add region with a non-zero base  (Kumar Gala)

If we add to an empty lmb region with a non-zero base, we will not coalesce the number of regions down to one. This causes problems on ppc32 for the memory region, as it is assumed to only have one region.

We can fix this easily by special-casing the initial add to just replace the dummy region.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
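The special case amounts to a few lines near the top of lmb_add_region(); a hedged sketch, with field names approximating the lmb structures:

    /* If this is the first real add, just overwrite the initial dummy entry
     * instead of appending a second region that would never be coalesced. */
    if (rgn->region[0].size == 0) {
        rgn->region[0].base = base;
        rgn->region[0].size = size;
        return 0;
    }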
2008-02-13  [LMB]: Fix bug in __lmb_alloc_base().  (David S. Miller)

We need to check lmb_add_region() for errors; it can run out of regions, etc.

Also, the size needs to be padded to the given alignment or else the lmb.reserved regions don't get expanded and instead we get tons of holes and eventually run out of regions prematurely.

Signed-off-by: David S. Miller <davem@davemloft.net>
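A hedged sketch of the two points combined into a small wrapper (the name reserve_checked() is invented for illustration; the helpers approximate the lmb internals):

    static u64 reserve_checked(u64 base, u64 size, u64 align)
    {
        size = lmb_align_up(size, align);       /* pad so lmb.reserved coalesces cleanly */

        if (lmb_add_region(&lmb.reserved, base, size) < 0)
            return 0;                           /* ran out of region slots */
        return base;
    }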
2008-02-13  [LIB]: Make PowerPC LMB code generic so sparc64 can use it too.  (David S. Miller)

Signed-off-by: David S. Miller <davem@davemloft.net>