path: root/meta-amd-bsp/recipes-kernel/linux/linux-yocto-4.14.71/1365-mm-remove-__GFP_COLD.patch
Diffstat (limited to 'meta-amd-bsp/recipes-kernel/linux/linux-yocto-4.14.71/1365-mm-remove-__GFP_COLD.patch')
-rw-r--r--  meta-amd-bsp/recipes-kernel/linux/linux-yocto-4.14.71/1365-mm-remove-__GFP_COLD.patch  58
1 file changed, 58 insertions(+), 0 deletions(-)
diff --git a/meta-amd-bsp/recipes-kernel/linux/linux-yocto-4.14.71/1365-mm-remove-__GFP_COLD.patch b/meta-amd-bsp/recipes-kernel/linux/linux-yocto-4.14.71/1365-mm-remove-__GFP_COLD.patch
new file mode 100644
index 00000000..dd653855
--- /dev/null
+++ b/meta-amd-bsp/recipes-kernel/linux/linux-yocto-4.14.71/1365-mm-remove-__GFP_COLD.patch
@@ -0,0 +1,58 @@
+From aed13d628771d200027f5ba415b5b1cc75a18d9d Mon Sep 17 00:00:00 2001
+From: Mel Gorman <mgorman@techsingularity.net>
+Date: Wed, 15 Nov 2017 17:38:03 -0800
+Subject: [PATCH 1365/4131] mm: remove __GFP_COLD
+
+As the page free path makes no distinction between cache hot and cold
+pages, there is no real useful ordering of pages in the free list that
+allocation requests can take advantage of. Judging from the users of
+__GFP_COLD, it is likely that a number of them are the result of copying
+other sites instead of actually measuring the impact. Remove the
+__GFP_COLD parameter which simplifies a number of paths in the page
+allocator.
+
+This is potentially controversial but bear in mind that the size of the
+per-cpu pagelists versus modern cache sizes means that the whole per-cpu
+list can often fit in the L3 cache. Hence, there is only a potential
+benefit for microbenchmarks that alloc/free pages in a tight loop. It's
+even worse when THP is taken into account which has little or no chance
+of getting a cache-hot page as the per-cpu list is bypassed and the
+zeroing of multiple pages will thrash the cache anyway.
+
+The truncate microbenchmarks are not shown as this patch affects the
+allocation path and not the free path. A page fault microbenchmark was
+tested but it showed no significant difference, which is not surprising
+given that the __GFP_COLD branches are a minuscule percentage of the
+fault path.
+
+Link: http://lkml.kernel.org/r/20171018075952.10627-9-mgorman@techsingularity.net
+Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: Andi Kleen <ak@linux.intel.com>
+Cc: Dave Chinner <david@fromorbit.com>
+Cc: Dave Hansen <dave.hansen@intel.com>
+Cc: Jan Kara <jack@suse.cz>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Sudheesh Mavila <sudheesh.mavila@amd.com>
+---
+ drivers/net/ethernet/amd/xgbe/xgbe-desc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+index 45d9230..cc1e4f8 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+@@ -295,7 +295,7 @@ static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
+ order = alloc_order;
+
+ /* Try to obtain pages, decreasing order if necessary */
+- gfp = GFP_ATOMIC | __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
++ gfp = GFP_ATOMIC | __GFP_COMP | __GFP_NOWARN;
+ while (order >= 0) {
+ pages = alloc_pages_node(node, gfp, order);
+ if (pages)
+--
+2.7.4
+
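For reference, below is a minimal sketch of the decreasing-order allocation pattern that xgbe_alloc_pages() follows once __GFP_COLD is dropped from the gfp mask. The helper name and its surrounding structure are illustrative assumptions, not the driver's actual code; only the gfp mask and alloc_pages_node() call mirror the patched hunk.

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustrative only: try progressively smaller orders until an
 * allocation succeeds, using the same mask as the patched driver
 * (GFP_ATOMIC | __GFP_COMP | __GFP_NOWARN, with __GFP_COLD removed).
 */
static struct page *xgbe_like_alloc(int node, int max_order,
                                    unsigned int *got_order)
{
        gfp_t gfp = GFP_ATOMIC | __GFP_COMP | __GFP_NOWARN;
        int order;

        for (order = max_order; order >= 0; order--) {
                struct page *pages = alloc_pages_node(node, gfp, order);

                if (pages) {
                        /* caller receives a block of PAGE_SIZE << order bytes */
                        *got_order = order;
                        return pages;
                }
        }

        return NULL;    /* even a single order-0 page could not be allocated */
}

Because the allocation is atomic and high-order requests can fail under fragmentation, the loop simply retries at smaller orders rather than relying on any cache-hot/cold hint, which is exactly the behaviour the mask keeps after this patch.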