path: root/include
Age  Commit message  Author
2013-04-10  thermal: shorten too long mcast group name  (Masatake YAMATO)
[ Upstream commits 73214f5d9f33b79918b1f7babddd5c8af28dd23d and f1e79e208076ffe7bad97158275f1c572c04f5c7, the latter adds an assertion to genetlink to prevent this from happening again in the future. ] The original name is too long. Signed-off-by: Masatake YAMATO <yamato@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-27  signal: Define __ARCH_HAS_SA_RESTORER so we know whether to clear sa_restorer  (Ben Hutchings)
flush_signal_handlers() needs to know whether sigaction::sa_restorer is defined, not whether SA_RESTORER is defined. Define the __ARCH_HAS_SA_RESTORER macro to indicate this. Vaguely based on upstream commit 574c4866e33d 'consolidate kernel-side struct sigaction declarations'. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Al Viro <viro@zeniv.linux.org.uk>
2013-03-27  efi_pstore: Introducing workqueue updating sysfs  (Seiji Aguchi)
commit a93bc0c6e07ed9bac44700280e65e2945d864fd4 upstream. [Problem] efi_pstore creates sysfs entries, which enable users to access NVRAM, in a write callback. If a kernel panic happens in an interrupt context, it may fail because it could sleep due to dynamic memory allocations while creating sysfs entries. [Patch Description] This patch removes sysfs operations from the write callback by introducing a workqueue which updates sysfs entries and is scheduled after the write callback is called. Also, the workqueue is kicked only in the oops case. A system will go down in the other cases such as panic, clean shutdown and emergency restart, and we don't need to create sysfs entries then because there is no chance for users to access them. With this patch, efi_pstore is robust against a kernel panic in an interrupt context. Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com> Acked-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> [bwh: Backported to 3.2: - Adjust context - Don't check reason in efi_pstore_write(), as it is not given as a parameter - Move up declaration of __efivars] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-27  inet: limit length of fragment queue hash table bucket lists  (Hannes Frederic Sowa)
[ Upstream commit 5a3da1fe9561828d0ca7eca664b16ec2b9bf0055 ] This patch introduces a constant limit on the length of the fragment queue hash table bucket lists. The limit of 128 is currently chosen somewhat arbitrarily and just ensures that we can fill up the fragment cache with empty packets up to the default ip_frag_high_thresh limits. It should just protect against list iteration eating considerable amounts of cpu. If we reach the maximum length in one hash bucket, a warning is printed. This is implemented on the caller side of inet_frag_find to distinguish between the different users of inet_fragment.c. I dropped the out-of-memory warning in the ipv4 fragment lookup path, because we already get a warning from the slab allocator. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jesper Dangaard Brouer <jbrouer@redhat.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
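For illustration, the caller-side check amounts to a bounded hash-chain walk along these lines (a minimal sketch, assuming a lookup loop that counts bucket depth; frag_matches() and frag_find_sketch() are illustrative names, only INETFRAGS_MAXDEPTH comes from the upstream change):

    #define INETFRAGS_MAXDEPTH 128

    /* Sketch: stop trusting a bucket whose list has grown past the limit,
     * instead of burning cpu walking an attacker-controlled chain. */
    static struct inet_frag_queue *frag_find_sketch(struct hlist_head *bucket)
    {
            struct inet_frag_queue *q;
            struct hlist_node *n;
            int depth = 0;

            hlist_for_each_entry(q, n, bucket, list) {
                    if (frag_matches(q))            /* illustrative match helper */
                            return q;
                    depth++;
            }
            if (depth > INETFRAGS_MAXDEPTH)
                    return ERR_PTR(-ENOBUFS);       /* caller prints the warning */
            return NULL;                            /* caller creates a new queue */
    }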
2013-03-27  tcp: fix skb_availroom()  (Eric Dumazet)
[ Upstream commit 16fad69cfe4adbbfa813de516757b87bcae36d93 ] Chrome OS team reported a crash on a Pixel ChromeBook in TCP stack : https://code.google.com/p/chromium/issues/detail?id=182056 commit a21d45726acac (tcp: avoid order-1 allocations on wifi and tx path) did a poor choice adding an 'avail_size' field to skb, while what we really needed was a 'reserved_tailroom' one. It would have avoided commit 22b4a4f22da (tcp: fix retransmit of partially acked frames) and this commit. Crash occurs because skb_split() is not aware of the 'avail_size' management (and should not be aware) Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Mukesh Agrawal <quiche@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
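As a rough sketch of the idea (not the exact upstream code), replacing the 'avail_size' bookkeeping with a 'reserved_tailroom' field lets the available room be derived from the normal tail pointers, so helpers such as skb_split() need no special knowledge of it:

    /* Sketch: tail room still usable by TCP once the bytes reserved for
     * other purposes (e.g. wifi headers) are excluded. */
    static inline int skb_availroom(const struct sk_buff *skb)
    {
            if (skb_is_nonlinear(skb))
                    return 0;

            return skb->end - skb->tail - skb->reserved_tailroom;
    }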
2013-03-27  ipv4: fix definition of FIB_TABLE_HASHSZ  (Denis V. Lunev)
[ Upstream commit 5b9e12dbf92b441b37136ea71dac59f05f2673a9 ] A long time ago, commit 93456b6d7753def8760b423ac6b986eb9d5a4a95 ("[IPV4]: Unify access to the routing tables.", Denis V. Lunev <den@openvz.org>, Thu Jan 10 03:23:38 2008 -0800) gave the definition of the FIB_TABLE_HASHSZ size the wrong dependency: it should depend upon CONFIG_IP_MULTIPLE_TABLES (as in the original code) but it was made to depend on CONFIG_IP_ROUTE_MULTIPATH. This patch returns the situation to the original state. The problem was spotted by Tingwei Liu. Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Tingwei Liu <tingw.liu@gmail.com> CC: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-27  perf,x86: fix link failure for non-Intel configs  (David Rientjes)
commit 6c4d3bc99b3341067775efd4d9d13cc8e655fd7c upstream. Commit 1d9d8639c063 ("perf,x86: fix kernel crash with PEBS/BTS after suspend/resume") introduces a link failure since perf_restore_debug_store() is only defined for CONFIG_CPU_SUP_INTEL: arch/x86/power/built-in.o: In function `restore_processor_state': (.text+0x45c): undefined reference to `perf_restore_debug_store' Fix it by defining the dummy function appropriately. Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-27  perf,x86: fix kernel crash with PEBS/BTS after suspend/resume  (Stephane Eranian)
commit 1d9d8639c063caf6efc2447f5f26aa637f844ff6 upstream. This patch fixes a kernel crash when using precise sampling (PEBS) after a suspend/resume. Turns out the CPU notifier code is not invoked on CPU0 (BP). Therefore, the DS_AREA (used by PEBS) is not restored properly by the kernel and keeps it power-on/resume value of 0 causing any PEBS measurement to crash when running on CPU0. The workaround is to add a hook in the actual resume code to restore the DS Area MSR value. It is invoked for all CPUS. So for all but CPU0, the DS_AREA will be restored twice but this is harmless. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-20  NLS: improve UTF8 -> UTF16 string conversion routine  (Alan Stern)
commit 0720a06a7518c9d0c0125bd5d1f3b6264c55c3dd upstream. The utf8s_to_utf16s conversion routine needs to be improved. Unlike its utf16s_to_utf8s sibling, it doesn't accept arguments specifying the maximum length of the output buffer or the endianness of its 16-bit output. This patch (as1501) adds the two missing arguments, and adjusts the only two places in the kernel where the function is called. A follow-on patch will add a third caller that does utilize the new capabilities. The two conversion routines are still annoyingly inconsistent in the way they handle invalid byte combinations. But that's a subject for a different patch. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> CC: Clemens Ladisch <clemens@ladisch.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
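The reworked conversion routine gains an output-length limit and an endianness selector, roughly along these lines (a sketch of the new signature based on the description above; argument names are illustrative):

    enum utf16_endian { UTF16_HOST_ENDIAN, UTF16_LITTLE_ENDIAN, UTF16_BIG_ENDIAN };

    /* Convert at most 'len' UTF-8 bytes from 's' into at most 'maxout'
     * 16-bit code units in 'pwcs', written with the requested endianness.
     * Returns the number of code units produced, or a negative error. */
    int utf8s_to_utf16s(const u8 *s, int len, enum utf16_endian endian,
                        wchar_t *pwcs, int maxout);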
2013-03-20  tty/serial: Add support for Altera serial port  (Ley Foon Tan)
commit e06c93cacb82dd147266fd1bdb2d0a0bd45ff2c1 upstream. Add support for Altera 8250/16550 compatible serial port. Signed-off-by: Ley Foon Tan <lftan@altera.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [bwh: Backported to 3.2: adjust filenames, context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-20  efi: be more paranoid about available space when creating variables  (Josh Boyer)
commit 68d929862e29a8b52a7f2f2f86a0600423b093cd upstream. UEFI variables are typically stored in flash. For various reasons, available space is typically not reclaimed immediately upon the deletion of a variable - instead, the system will garbage collect during initialisation after a reboot. Some systems appear to handle this garbage collection extremely poorly, failing if more than 50% of the system flash is in use. This can result in the machine refusing to boot. The safest thing to do for the moment is to forbid writes if they'd end up using more than half of the storage space. We can make this more fine-grained later if we come up with a method for identifying the broken machines. Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> [bwh: Backported to 3.2: - Drop efivarfs changes and unused check_var_size() - Add error codes to include/linux/efi.h, added upstream by commit 5d9db883761a ('efi: Add support for a UEFI variable filesystem') - Add efi_status_to_err(), added upstream by commit 7253eaba7b17 ('efivarfs: Return an error if we fail to read a variable')] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-20  efi_pstore: Check remaining space with QueryVariableInfo() before writing data  (Seiji Aguchi)
commit d80a361d779a9f19498943d1ca84243209cd5647 upstream. [Issue] As discussed in a thread below, Running out of space in EFI isn't a well-tested scenario. And we wouldn't expect all firmware to handle it gracefully. http://marc.info/?l=linux-kernel&m=134305325801789&w=2 On the other hand, current efi_pstore doesn't check a remaining space of storage at writing time. Therefore, efi_pstore may not work if it tries to write a large amount of data. [Patch Description] To avoid handling the situation above, this patch checks if there is a space enough to log with QueryVariableInfo() before writing data. Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com> Acked-by: Mike Waychison <mikew@google.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-20  dm: fix truncated status strings  (Mikulas Patocka)
commit fd7c092e711ebab55b2688d3859d95dfd0301f73 upstream. Avoid returning a truncated table or status string instead of setting the DM_BUFFER_FULL_FLAG when the last target of a table fills the buffer. When processing a table or status request, the function retrieve_status calls ti->type->status. If ti->type->status returns non-zero, retrieve_status assumes that the buffer overflowed and sets DM_BUFFER_FULL_FLAG. However, targets don't return non-zero values from their status method on overflow. Most targets returns always zero. If a buffer overflow happens in a target that is not the last in the table, it gets noticed during the next iteration of the loop in retrieve_status; but if a buffer overflow happens in the last target, it goes unnoticed and erroneously truncated data is returned. In the current code, the targets behave in the following way: * dm-crypt returns -ENOMEM if there is not enough space to store the key, but it returns 0 on all other overflows. * dm-thin returns errors from the status method if a disk error happened. This is incorrect because retrieve_status doesn't check the error code, it assumes that all non-zero values mean buffer overflow. * all the other targets always return 0. This patch changes the ti->type->status function to return void (because most targets don't use the return code). Overflow is detected in retrieve_status: if the status method fills up the remaining space completely, it is assumed that buffer overflow happened. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> [bwh: Backported to 3.2: - Adjust context - dm_status_fn doesn't take a status_flags parameter - Bump the last component of each current version (verified not to match any version used in mainline) - Drop changes to dm-verity] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  exec: use -ELOOP for max recursion depth  (Kees Cook)
commit d740269867021faf4ce38a449353d2b986c34a67 upstream. To avoid an explosion of request_module calls on a chain of abusive scripts, fail maximum recursion with -ELOOP instead of -ENOEXEC. As soon as maximum recursion depth is hit, the error will fail all the way back up the chain, aborting immediately. This also has the side-effect of stopping the user's shell from attempting to reexecute the top-level file as a shell script. As seen in the dash source: if (cmd != path_bshell && errno == ENOEXEC) { *argv-- = cmd; *argv = cmd = path_bshell; goto repeat; } The above logic was designed for running scripts automatically that lacked the "#!" header, not to re-try failed recursion. On a legitimate -ENOEXEC, things continue to behave as the shell expects. Additionally, when tracking recursion, the binfmt handlers should not be involved. The recursion being tracked is the depth of calls through search_binary_handler(), so that function should be exclusively responsible for tracking the depth. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: halfdog <me@halfdog.net> Cc: P J P <ppandit@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  usermodehelper: implement UMH_KILLABLE  (Oleg Nesterov)
commit d0bd587a80960d7ba7e0c8396e154028c9045c54 upstream. Implement UMH_KILLABLE, to be used along with UMH_WAIT_EXEC/PROC. The caller must ensure that subprocess_info->path/etc can not go away until call_usermodehelper_freeinfo(). call_usermodehelper_exec(UMH_KILLABLE) does wait_for_completion_killable. If it fails, it uses xchg(&sub_info->complete, NULL) to serialize with umh_complete() which does the same xchg() to access sub_info->complete. If call_usermodehelper_exec wins, it can safely return. umh_complete() should get NULL and call call_usermodehelper_freeinfo(). Otherwise we know that umh_complete() was already called; in this case call_usermodehelper_exec() falls back to wait_for_completion() which should succeed "very soon". Note: UMH_NO_WAIT == -1 but it obviously should not be used with UMH_KILLABLE. We delay the necessary cleanup to simplify the backporting. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tejun Heo <tj@kernel.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
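The xchg()-based hand-off described above can be sketched like this (simplified from the commit description, not the verbatim kernel code; the goto labels are illustrative):

    static void umh_complete(struct subprocess_info *sub_info)
    {
            struct completion *comp = xchg(&sub_info->complete, NULL);

            /* If xchg() returned NULL, a killed caller has already taken
             * ownership and we must free the info ourselves. */
            if (comp)
                    complete(comp);
            else
                    call_usermodehelper_freeinfo(sub_info);
    }

    /* Caller side, UMH_KILLABLE case: */
    retval = wait_for_completion_killable(&sub_info->complete);
    if (!retval)
            goto wait_done;
    if (xchg(&sub_info->complete, NULL))
            goto unlock;                            /* we won: safe to return */
    wait_for_completion(&sub_info->complete);       /* umh_complete() already running */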
2013-03-06  8250: use correct value for PORT_BRCM_TRUMANAGE  (Ben Hutchings)
When backporting commit ebebd49a8eab ('8250/16?50: Add support for Broadcom TruManage redirected serial port') I took the next available port type number for PORT_BRCM_TRUMANAGE (22). However, the 8250 port type numbers are exposed to userland through the TIOC{G,S}SERIAL ioctls and so must remain stable. Redefine PORT_BRCM_TRUMANAGE as 25, matching mainline as of commit 85f024401bf807. This leaves port types 22-24 within the valid range for 8250 but not implemented there. Change serial8250_verify_port() to specifically reject these and change serial8250_type() to return "unknown" for them (though I'm not sure why it would ever see them). Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  pstore: Avoid deadlock in panic and emergency-restart path  (Seiji Aguchi)
commit 9f244e9cfd70c7c0f82d3c92ce772ab2a92d9f64 upstream. [Issue] When pstore is in panic and emergency-restart paths, it may be blocked in those paths because it simply takes spin_lock. This is an example scenario which pstore may hang up in a panic path: - cpuA grabs psinfo->buf_lock - cpuB panics and calls smp_send_stop - smp_send_stop sends IRQ to cpuA - after 1 second, cpuB gives up on cpuA and sends an NMI instead - cpuA is now in an NMI handler while still holding buf_lock - cpuB is deadlocked This case may happen if a firmware has a bug and cpuA is stuck talking with it more than one second. Also, this is a similar scenario in an emergency-restart path: - cpuA grabs psinfo->buf_lock and stucks in a firmware - cpuB kicks emergency-restart via either sysrq-b or hangcheck timer. And then, cpuB is deadlocked by taking psinfo->buf_lock again. [Solution] This patch avoids the deadlocking issues in both panic and emergency_restart paths by introducing a function, is_non_blocking_path(), to check if a cpu can be blocked in current path. With this patch, pstore is not blocked even if another cpu has taken a spin_lock, in those paths by changing from spin_lock_irqsave to spin_trylock_irqsave. In addition, according to a comment of emergency_restart() in kernel/sys.c, spin_lock shouldn't be taken in an emergency_restart path to avoid deadlock. This patch fits the comment below. <snip> /** * emergency_restart - reboot the system * * Without shutting down any hardware or taking any locks * reboot the system. This is called when we know we are in * trouble so this is our best effort to reboot. This is * safe to call in interrupt context. */ void emergency_restart(void) <snip> Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com> Acked-by: Don Zickus <dzickus@redhat.com> Signed-off-by: Tony Luck <tony.luck@intel.com> [bwh: Backported to 3.2: - Adjust context - Add #include <linux/kmsg_dump.h>] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  ipv6: use a stronger hash for tcp  (Eric Dumazet)
[ Upstream commit 08dcdbf6a7b9d14c2302c5bd0c5390ddf122f664 ] It looks like its possible to open thousands of TCP IPv6 sessions on a server, all landing in a single slot of TCP hash table. Incoming packets have to lookup sockets in a very long list. We should hash all bits from foreign IPv6 addresses, using a salt and hash mix, not a simple XOR. inet6_ehashfn() can also separately use the ports, instead of xoring them. Reported-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
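Conceptually the change replaces a plain XOR with a salted jhash over all foreign-address bits, roughly as follows (a sketch, not the exact upstream inet6_ehashfn(); the _sketch suffix marks it as illustrative):

    /* Sketch: hash every word of the foreign IPv6 address with a secret
     * salt, and mix the ports separately instead of XORing them in. */
    static inline unsigned int inet6_ehashfn_sketch(const struct in6_addr *laddr,
                                                    const u16 lport,
                                                    const struct in6_addr *faddr,
                                                    const __be16 fport)
    {
            u32 ports = (((u32)lport) << 16) | (__force u32)fport;

            return jhash_3words((__force u32)laddr->s6_addr32[3],
                                ipv6_addr_jhash(faddr),   /* jhash over all 128 bits */
                                ports, inet_ehash_secret);
    }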
2013-03-06  unbreak automounter support on 64-bit kernel with 32-bit userspace (v2)  (Helge Deller)
commit 4f4ffc3a5398ef9bdbb32db04756d7d34e356fcf upstream. automount-support is broken on the parisc architecture, because the existing #if list does not include a check for defined(__hppa__). The HPPA (parisc) architecture is similiar to other 64bit Linux targets where we have to define autofs_wqt_t (which is passed back and forth to user space) as int type which has a size of 32bit across 32 and 64bit kernels. During the discussion on the mailing list, H. Peter Anvin suggested to invert the #if list since only specific platforms (specifically those who do not have a 32bit userspace, like IA64 and Alpha) should have autofs_wqt_t as unsigned long type. This suggestion is probably the best way to go, since Arm64 (and maybe others?) seems to have a non-working automounter. So in the long run even for other new upcoming architectures this inverted check seem to be the best solution, since it will not require them to change this #if again (unless they are 64bit only). Signed-off-by: Helge Deller <deller@gmx.de> Acked-by: H. Peter Anvin <hpa@zytor.com> Acked-by: Ian Kent <raven@themaw.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> CC: James Bottomley <James.Bottomley@HansenPartnership.com> CC: Rolf Eike Beer <eike-kernel@sf-tec.de> [bwh: Backported to 3.2: adjust filename] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  nbd: fsync and kill block device on shutdown  (Paolo Bonzini)
commit 3a2d63f87989e01437ba994df5f297528c353d7d upstream. There are two problems with shutdown in the NBD driver. 1: Receiving the NBD_DISCONNECT ioctl does not sync the filesystem. This patch adds the sync operation into __nbd_ioctl()'s NBD_DISCONNECT handler. This is useful because BLKFLSBUF is restricted to processes that have CAP_SYS_ADMIN, and the NBD client may not possess it (fsync of the block device does not sync the filesystem, either). 2: Once we clear the socket we have no guarantee that later reads will come from the same backing storage. The patch adds calls to kill_bdev() in __nbd_ioctl()'s socket clearing code so the page cache is cleaned, lest reads that hit on the page cache will return stale data from the previously-accessible disk. Example: # qemu-nbd -r -c/dev/nbd0 /dev/sr0 # file -s /dev/nbd0 /dev/stdin: # UDF filesystem data (version 1.5) etc. # qemu-nbd -d /dev/nbd0 # qemu-nbd -r -c/dev/nbd0 /dev/sda # file -s /dev/nbd0 /dev/stdin: # UDF filesystem data (version 1.5) etc. While /dev/sda has: # file -s /dev/sda /dev/sda: x86 boot sector; etc. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Acked-by: Paul Clements <Paul.Clements@steeleye.com> Cc: Alex Bligh <alex@alex.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [bwh: Backported to 3.2: - Adjusted context - s/\bnbd\b/lo/ - Incorporate export of kill_bdev() from commit ff01bb483265 ('fs: move code out of buffer.c')] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  mm: mmu_notifier: have mmu_notifiers use a global SRCU so they may safely schedule  (Sagi Grimberg)
commit 21a92735f660eaecf69a6f2e777f18463760ec32 upstream. With an RCU based mmu_notifier implementation, any callout to mmu_notifier_invalidate_range_{start,end}() or mmu_notifier_invalidate_page() would not be allowed to call schedule() as that could potentially allow a modification to the mmu_notifier structure while it is currently being used. Since srcu allocs 4 machine words per instance per cpu, we may end up with memory exhaustion if we use srcu per mm. So all mms share a global srcu. Note that during large mmu_notifier activity exit & unregister paths might hang for longer periods, but it is tolerable for current mmu_notifier clients. Signed-off-by: Sagi Grimberg <sagig@mellanox.co.il> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Haggai Eran <haggaie@mellanox.com> Cc: "Paul E. McKenney" <paulmck@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
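The "one global SRCU shared by all mms" idea amounts to something like the following (a simplified sketch assuming the usual SRCU API, not the full upstream patch):

    /* One srcu_struct for every mm, so the per-cpu footprint is paid once. */
    static struct srcu_struct srcu;

    void __mmu_notifier_invalidate_page(struct mm_struct *mm, unsigned long address)
    {
            struct mmu_notifier *mn;
            struct hlist_node *n;
            int id;

            /* SRCU read-side section: the callouts below may now sleep. */
            id = srcu_read_lock(&srcu);
            hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
                    if (mn->ops->invalidate_page)
                            mn->ops->invalidate_page(mn, mm, address);
            }
            srcu_read_unlock(&srcu, id);
    }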
2013-03-06  ALSA: usb: Fix Processing Unit Descriptor parsers  (Pawel Moll)
commit b531f81b0d70ffbe8d70500512483227cc532608 upstream. Commit 99fc86450c439039d2ef88d06b222fd51a779176 "ALSA: usb-mixer: parse descriptors with structs" introduced a set of useful parsers for descriptors. Unfortunately the parses for the Processing Unit Descriptor came with a very subtle bug... Functions uac_processing_unit_iProcessing() and uac_processing_unit_specific() were indexing the baSourceID array forgetting the fields before the iProcessing and process-specific descriptors. The problem was observed with Sound Blaster Extigy mixer, where nNrModes in Up/Down-mix Processing Unit Descriptor was accessed at offset 10 of the descriptor (value 0) instead of offset 15 (value 7). In result the resulting control had interesting limit values: Simple mixer control 'Channel Routing Mode Select',0 Capabilities: volume volume-joined penum Playback channels: Mono Capture channels: Mono Limits: 0 - -1 Mono: -1 [100%] Fixed by starting from the bmControls, which was calculated correctly, instead of baSourceID. Now the mentioned control is fine: Simple mixer control 'Channel Routing Mode Select',0 Capabilities: volume volume-joined penum Playback channels: Mono Capture channels: Mono Limits: 0 - 6 Mono: 0 [0%] Signed-off-by: Pawel Moll <mail@pawelmoll.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> [bwh: Backported to 3.2: adjust filename] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  target: Fix lookup of dynamic NodeACLs during cached demo-mode operation  (Nicholas Bellinger)
commit fcf29481fb8e106daad6688f2e898226ee928992 upstream. This patch fixes a bug in core_tpg_check_initiator_node_acl() -> core_tpg_get_initiator_node_acl() where a dynamically created se_node_acl generated during session login would be skipped during subsequent lookup due to the '!acl->dynamic_node_acl' check, causing a new se_node_acl to be created with a duplicate ->initiatorname. This would occur when a fabric endpoint was configured with TFO->tpg_check_demo_mode()=1 + TPF->tpg_check_demo_mode_cache()=1 preventing the release of an existing se_node_acl during se_session shutdown. Also, drop the unnecessary usage of core_tpg_get_initiator_node_acl() within core_dev_init_initiator_node_lun_acl() that originally required the extra '!acl->dynamic_node_acl' check, and just pass the configfs provided se_node_acl pointer instead. Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> [bwh: Backported to 3.2: adjust context, filename of header] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  pps: Add pps_lookup_dev() function  (George Spelvin)
commit 513b032c98b4b9414aa4e9b4a315cb1bf0380101 upstream. The PPS serial line discipline wants to attach a PPS device to a tty without changing the tty code to add a struct pps_device * pointer. Since the number of PPS devices in a typical system is generally very low (n=1 is by far the most common), it's practical to search the entire list of allocated pps devices. (We capture the timestamp before the lookup, so the timing isn't affected.) It is a bit ugly that this function, which is part of the in-kernel PPS API, has to be in pps.c as opposed to kapi,c, but that's not something that affects users. Signed-off-by: George Spelvin <linux@horizon.com> Acked-by: Rodolfo Giometti <giometti@enneenne.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  idr: idr_for_each_entry() macro  (Philipp Reisner)
commit 9749f30f1a387070e6e8351f35aeb829eacc3ab6 upstream. Inspired by the list_for_each_entry() macro Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
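A minimal usage sketch of the new macro (the idr instance and entry type are illustrative, not part of the commit):

    struct idr my_idr;              /* illustrative idr instance */
    struct my_obj *obj;
    int id;

    /* Walk every allocated entry, much like list_for_each_entry(). */
    idr_for_each_entry(&my_idr, obj, id)
            pr_info("id %d -> %p\n", id, obj);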
2013-03-06  fb: Yet another band-aid for fixing lockdep mess  (Takashi Iwai)
commit e93a9a868792ad71cdd09d75e5a02d8067473c4e upstream. I've still got lockdep warnings even after Alan's patch, and it seems that yet more band aids are required to paper over similar paths for unbind_con_driver() and unregister_con_driver(). After this hack, lockdep warnings are finally gone. Signed-off-by: Takashi Iwai <tiwai@suse.de> Cc: Alan Cox <alan@linux.intel.com> Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de> Cc: Jiri Kosina <jkosina@suse.cz> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  fb: rework locking to fix lock ordering on takeover  (Alan Cox)
commit 50e244cc793d511b86adea24972f3a7264cae114 upstream. Adjust the console layer to allow a take over call where the caller already holds the locks. Make the fb layer lock in order. This is partly a band aid, the fb layer is terminally confused about the locking rules it uses for its notifiers it seems. [akpm@linux-foundation.org: remove stray non-ascii char, tidy comment] [akpm@linux-foundation.org: export do_take_over_console()] [airlied: cleanup another non-ascii char] Signed-off-by: Alan Cox <alan@linux.intel.com> Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Jiri Kosina <jkosina@suse.cz> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  vgacon/vt: clear buffer attributes when we load a 512 character font (v2)  (Dave Airlie)
commit 2a2483072393b27f4336ab068a1f48ca19ff1c1e upstream. When we switch from 256->512 byte font rendering mode, it means the current contents of the screen is being reinterpreted. The bit that holds the high bit of the 9-bit font, may have been previously set, and thus the new font misrenders. The problem case we see is grub2 writes spaces with the bit set, so it ends up with data like 0x820, which gets reinterpreted into 0x120 char which the font translates into G with a circumflex. This flashes up on screen at boot and is quite ugly. A current side effect of this patch though is that any rendering on the screen changes color to a slightly darker color, but at least the screen no longer corrupts. v2: as suggested by hpa, always clear the attribute space, whether we are are going to or from 512 chars. Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-03-06  quota: autoload the quota_v2 module for QFMT_VFS_V1 quota format  (Theodore Ts'o)
commit c3ad83d9efdfe6a86efd44945a781f00c879b7b4 upstream. Otherwise, ext4 file systems with the quota featured enable will get a very confusing "No such process" error message if the quota code is built as a module and the quota_v2 module has not been loaded. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com> Acked-by: Jan Kara <jack@suse.cz> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-02-20  ptrace: introduce signal_wake_up_state() and ptrace_signal_wake_up()  (Oleg Nesterov)
commit 910ffdb18a6408e14febbb6e4b6840fd2c928c82 upstream. Cleanup and preparation for the next change. signal_wake_up(resume => true) is overused. None of ptrace/jctl callers actually want to wakeup a TASK_WAKEKILL task, but they can't specify the necessary mask. Turn signal_wake_up() into signal_wake_up_state(state), reintroduce signal_wake_up() as a trivial helper, and add ptrace_signal_wake_up() which adds __TASK_TRACED. This way ptrace_signal_wake_up() can work "inside" ptrace_request() even if the tracee doesn't have the TASK_WAKEKILL bit set. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
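The resulting helpers are small; roughly (per the commit description, lightly paraphrased rather than quoted verbatim):

    void signal_wake_up_state(struct task_struct *t, unsigned int state);

    static inline void signal_wake_up(struct task_struct *t, bool resume)
    {
            signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0);
    }

    /* Can wake a tracee that lacks TASK_WAKEKILL; usable inside ptrace_request(). */
    static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
    {
            signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
    }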
2013-02-06  printk: fix buffer overflow when calling log_prefix function from call_console_drivers  (Alexandre SIMON)
This patch corrects a buffer overflow in kernels from 3.0 to 3.4 when calling the log_prefix() function from call_console_drivers(). This bug existed in previous releases but was revealed by commit 162a7e7500f9664636e649ba59defe541b7c2c60 (2.6.39 => 3.0), which changed how memory is allocated for the early printk buffer (use of memblock_alloc). It disappears with commit 7ff9554bb578ba02166071d2d487b7fc7d860d62 (3.4 => 3.5), which refactors printk buffer management. In log_prefix(), the access to "p[0]", "p[1]", "p[2]" or "simple_strtoul(&p[1], &endp, 10)" may cause a buffer overflow, as this function is called from call_console_drivers by passing "&LOG_BUF(cur_index)" where the index must be masked so that it does not exceed the buffer's boundary. The trick is to prepare in call_console_drivers() a buffer with the necessary data (PRI field of the syslog message) to be safely evaluated in log_prefix(). This patch can be applied to stable kernel branches 3.0.y, 3.2.y and 3.4.y. Without this patch, one can freeze a server by running this loop from a shell: $ export DUMMY=`cat /dev/urandom | tr -dc '12345AZERTYUIOPQSDFGHJKLMWXCVBNazertyuiopqsdfghjklmwxcvbn' | head -c255` $ while true ; do echo $DUMMY > /dev/kmsg ; done Whether the server actually freezes depends on where memblock_alloc places the printk buffer: if the buffer overflow lands inside another kernel allocation the problem may not be revealed, otherwise the server may hang. Signed-off-by: Alexandre SIMON <Alexandre.Simon@univ-lorraine.fr> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-02-06  efi: Make 'efi_enabled' a function to query EFI facilities  (Matt Fleming)
commit 83e68189745ad931c2afd45d8ee3303929233e7f upstream. Originally 'efi_enabled' indicated whether a kernel was booted from EFI firmware. Over time its semantics have changed, and it now indicates whether or not we are booted on an EFI machine with bit-native firmware, e.g. 64-bit kernel with 64-bit firmware. The immediate motivation for this patch is the bug report at, https://bugs.launchpad.net/ubuntu-cdimage/+bug/1040557 which details how running a platform driver on an EFI machine that is designed to run under BIOS can cause the machine to become bricked. Also, the following report, https://bugzilla.kernel.org/show_bug.cgi?id=47121 details how running said driver can also cause Machine Check Exceptions. Drivers need a new means of detecting whether they're running on an EFI machine, as sadly the expression, if (!efi_enabled) hasn't been a sufficient condition for quite some time. Users actually want to query 'efi_enabled' for different reasons - what they really want access to is the list of available EFI facilities. For instance, the x86 reboot code needs to know whether it can invoke the ResetSystem() function provided by the EFI runtime services, while the ACPI OSL code wants to know whether the EFI config tables were mapped successfully. There are also checks in some of the platform driver code to simply see if they're running on an EFI machine (which would make it a bad idea to do BIOS-y things). This patch is a prereq for the samsung-laptop fix patch. Cc: David Airlie <airlied@linux.ie> Cc: Corentin Chary <corentincj@iksaif.net> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Olof Johansson <olof@lixom.net> Cc: Peter Jones <pjones@redhat.com> Cc: Colin Ian King <colin.king@canonical.com> Cc: Steve Langasek <steve.langasek@canonical.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Konrad Rzeszutek Wilk <konrad@kernel.org> Cc: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> [bwh: Backported to 3.2: - Adjust context (a lot) - Add efi_is_native() function from commit 5189c2a7c776 ('x86: efi: Turn off efi_enabled after setup on mixed fw/kernel') - Make efi_init() bail out when booted non-native, as it would previously not be called in this case - Drop inapplicable changes to start_kernel()] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-02-06  8250/16?50: Add support for Broadcom TruManage redirected serial port  (Stephen Hurd)
commit ebebd49a8eab5e9aa1b1f8f1614ccc3c2120f886 upstream. Add support for the UART device present in Broadcom TruManage capable NetXtreme chips (ie: 5761m 5762, and 5725). This implementation has a hidden transmit FIFO, so running in single-byte interrupt mode results in too many interrupts. The UART_CAP_HFIFO capability was added to track this. It continues to reload the THR as long as the THRE and TSRE bits are set in the LSR up to a specified limit (1024 is used here). Signed-off-by: Stephen Hurd <shurd@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [bwh: Backported to 3.2: - Adjust filenames - Adjust context - First available port type number is 22 not 24 - s/uart_8250_port/uart_port/ - Use serial_in() not serial_port_in()] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-16  tcp: implement RFC 5961 4.2  (Eric Dumazet)
[ Upstream commit 0c24604b68fc7810d429d6c3657b6f148270e528 ] Implement the RFC 5961 mitigation against the Blind Reset attack using the SYN bit. Section 4.2 of RFC 5961 advises sending a Challenge ACK and dropping the incoming packet, instead of resetting the session. Add a new SNMP counter to count the number of challenge ACKs sent in response to SYN packets. (netstat -s | grep TCPSYNChallenge) Remove the obsolete TCPAbortOnSyn counter, since we no longer abort a TCP session because of a SYN flag. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Kiran Kumar Kella <kkiran@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-16  tcp: implement RFC 5961 3.2  (Eric Dumazet)
[ Upstream commit 282f23c6ee343126156dd41218b22ece96d747e3 ] Implement the RFC 5961 mitigation against the Blind Reset attack using the RST bit. The idea is to validate the incoming RST sequence against the exact RCV.NXT value, instead of the previously accepted window: (RCV.NXT <= SEG.SEQ < RCV.NXT+RCV.WND). If the sequence is in the window but not an exact match, send a "challenge ACK", so that the other side can resend an RST with the appropriate sequence. Add a new sysctl, tcp_challenge_ack_limit, to limit the number of challenge ACKs sent per second. Add a new SNMP counter to count the number of challenge ACKs sent. (netstat -s | grep TCPChallengeACK) Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Kiran Kumar Kella <kkiran@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
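In pseudo-C, the RST validation described above looks roughly like this (a sketch of the logic in the receive path, not the exact upstream code):

    /* Sketch: exact-match RSTs are honoured, in-window-but-inexact RSTs
     * trigger a rate-limited challenge ACK, everything else is dropped. */
    if (th->rst) {
            if (TCP_SKB_CB(skb)->seq == tp->rcv_nxt)
                    tcp_reset(sk);                      /* exact match: accept RST */
            else if (tcp_sequence(tp, TCP_SKB_CB(skb)->seq,
                                  TCP_SKB_CB(skb)->end_seq))
                    tcp_send_challenge_ack(sk);         /* in window: challenge ACK */
            goto discard;
    }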
2013-01-16  inet: Fix kmemleak in tcp_v4/6_syn_recv_sock and dccp_v4/6_request_recv_sock  (Christoph Paasch)
[ Upstream commit e337e24d6624e74a558aa69071e112a65f7b5758 ] If in either of the above functions inet_csk_route_child_sock() or __inet_inherit_port() fails, the newsk will not be freed: unreferenced object 0xffff88022e8a92c0 (size 1592): comm "softirq", pid 0, jiffies 4294946244 (age 726.160s) hex dump (first 32 bytes): 0a 01 01 01 0a 01 01 02 00 00 00 00 a7 cc 16 00 ................ 02 00 03 01 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<ffffffff8153d190>] kmemleak_alloc+0x21/0x3e [<ffffffff810ab3e7>] kmem_cache_alloc+0xb5/0xc5 [<ffffffff8149b65b>] sk_prot_alloc.isra.53+0x2b/0xcd [<ffffffff8149b784>] sk_clone_lock+0x16/0x21e [<ffffffff814d711a>] inet_csk_clone_lock+0x10/0x7b [<ffffffff814ebbc3>] tcp_create_openreq_child+0x21/0x481 [<ffffffff814e8fa5>] tcp_v4_syn_recv_sock+0x3a/0x23b [<ffffffff814ec5ba>] tcp_check_req+0x29f/0x416 [<ffffffff814e8e10>] tcp_v4_do_rcv+0x161/0x2bc [<ffffffff814eb917>] tcp_v4_rcv+0x6c9/0x701 [<ffffffff814cea9f>] ip_local_deliver_finish+0x70/0xc4 [<ffffffff814cec20>] ip_local_deliver+0x4e/0x7f [<ffffffff814ce9f8>] ip_rcv_finish+0x1fc/0x233 [<ffffffff814cee68>] ip_rcv+0x217/0x267 [<ffffffff814a7bbe>] __netif_receive_skb+0x49e/0x553 [<ffffffff814a7cc3>] netif_receive_skb+0x50/0x82 This happens, because sk_clone_lock initializes sk_refcnt to 2, and thus a single sock_put() is not enough to free the memory. Additionally, things like xfrm, memcg, cookie_values,... may have been initialized. We have to free them properly. This is fixed by forcing a call to tcp_done(), ending up in inet_csk_destroy_sock, doing the final sock_put(). tcp_done() is necessary, because it ends up doing all the cleanup on xfrm, memcg, cookie_values, xfrm,... Before calling tcp_done, we have to set the socket to SOCK_DEAD, to force it entering inet_csk_destroy_sock. To avoid the warning in inet_csk_destroy_sock, inet_num has to be set to 0. As inet_csk_destroy_sock does a dec on orphan_count, we first have to increase it. Calling tcp_done() allows us to remove the calls to tcp_clear_xmit_timer() and tcp_cleanup_congestion_control(). A similar approach is taken for dccp by calling dccp_done(). This is in the kernel since 093d282321 (tproxy: fix hash locking issue when using port redirection in __inet_inherit_port()), thus since version >= 2.6.37. Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-16  ftrace: Do not function trace inlined functions  (Steven Rostedt)
commit 45959ee7aa645815a5ce303a0ea1e48a21e67c6a upstream. When gcc inlines a function, it does not mark it with the mcount prologue, which in turn means that inlined functions are not traced by the function tracer. But if CONFIG_OPTIMIZE_INLINING is set, then gcc is allowed not to inline a function that is marked inline. Depending on the options and the compiler, a function may or may not be traced by the function tracer, depending on whether gcc decides to inline it or not. This has caused several problems in the past because gcc is not always consistent about what it decides to inline between different gcc versions. Some places should not be traced (like paravirt native_* functions) and these are mostly marked as inline. When gcc decides not to inline the function, and that function should not be traced, then the ftrace function tracer will suddenly break where it used to work fine. This becomes even harder to debug when different versions of gcc will not inline that function, making the same kernel and config work for some gcc versions and not for others. Making all functions marked inline untraced removes the ambiguity that gcc adds when it comes to tracing functions marked inline: all gcc versions become consistent about which functions are traced, and this source of erratic behaviour goes away. Note, only the inline macro used when CONFIG_OPTIMIZE_INLINING is set needs to have notrace added, as the attribute __always_inline will force the function to be inlined and then not traced. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
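The change itself is essentially one attribute on the 'inline' macro; roughly (a simplified sketch of the compiler-header arrangement, not the full diff):

    /* When gcc may ignore 'inline' (CONFIG_OPTIMIZE_INLINING), make sure an
     * out-of-lined copy never gets an mcount call site either. */
    #if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \
        !defined(CONFIG_OPTIMIZE_INLINING)
    #define inline  inline  __attribute__((always_inline)) notrace
    #else
    #define inline  inline  notrace
    #endif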
2013-01-16  mac80211: introduce IEEE80211_HW_TEARDOWN_AGGR_ON_BAR_FAIL  (Stanislaw Gruszka)
commit 5b632fe85ec82e5c43740b52e74c66df50a37db3 upstream. Commit f0425beda4d404a6e751439b562100b902ba9c98 "mac80211: retry sending failed BAR frames later instead of tearing down aggr" caused regression on rt2x00 hardware (connection hangs). This regression was fixed by commit be03d4a45c09ee5100d3aaaedd087f19bc20d01 "rt2x00: Don't let mac80211 send a BAR when an AMPDU subframe fails". But the latter commit caused yet another problem reported in https://bugzilla.kernel.org/show_bug.cgi?id=42828#c22 After long discussion in this thread: http://mid.gmane.org/20121018075615.GA18212@redhat.com and testing various alternative solutions, which failed on one or other setup, we have no other good fix for the issues like just revert both mentioned earlier commits. To do not affect other hardware which benefit from commit f0425beda4d404a6e751439b562100b902ba9c98, instead of reverting it, introduce flag that when used will restore mac80211 behaviour before the commit. Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> [replaced link with mid.gmane.org that has message-id] Signed-off-by: Johannes Berg <johannes.berg@intel.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-16  mm: limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT  (Michal Hocko)
commit 53a59fc67f97374758e63a9c785891ec62324c81 upstream. Since commit e303297e6c3a ("mm: extended batches for generic mmu_gather") we are batching pages to be freed until either tlb_next_batch cannot allocate a new batch or we are done. This works just fine most of the time but we can get in troubles with non-preemptible kernel (CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY) on large machines where too aggressive batching might lead to soft lockups during process exit path (exit_mmap) because there are no scheduling points down the free_pages_and_swap_cache path and so the freeing can take long enough to trigger the soft lockup. The lockup is harmless except when the system is setup to panic on softlockup which is not that unusual. The simplest way to work around this issue is to limit the maximum number of batches in a single mmu_gather. 10k of collected pages should be safe to prevent from soft lockups (we would have 2ms for one) even if they are all freed without an explicit scheduling point. This patch doesn't add any new explicit scheduling points because it relies on zap_pmd_range during page tables zapping which calls cond_resched per PMD. The following lockup has been reported for 3.0 kernel with a huge process (in order of hundreds gigs but I do know any more details). BUG: soft lockup - CPU#56 stuck for 22s! [kernel:31053] Modules linked in: af_packet nfs lockd fscache auth_rpcgss nfs_acl sunrpc mptctl mptbase autofs4 binfmt_misc dm_round_robin dm_multipath bonding cpufreq_conservative cpufreq_userspace cpufreq_powersave pcc_cpufreq mperf microcode fuse loop osst sg sd_mod crc_t10dif st qla2xxx scsi_transport_fc scsi_tgt netxen_nic i7core_edac iTCO_wdt joydev e1000e serio_raw pcspkr edac_core iTCO_vendor_support acpi_power_meter rtc_cmos hpwdt hpilo button container usbhid hid dm_mirror dm_region_hash dm_log linear uhci_hcd ehci_hcd usbcore usb_common scsi_dh_emc scsi_dh_alua scsi_dh_hp_sw scsi_dh_rdac scsi_dh dm_snapshot pcnet32 mii edd dm_mod raid1 ext3 mbcache jbd fan thermal processor thermal_sys hwmon cciss scsi_mod Supported: Yes CPU 56 Pid: 31053, comm: kernel Not tainted 3.0.31-0.9-default #1 HP ProLiant DL580 G7 RIP: 0010: _raw_spin_unlock_irqrestore+0x8/0x10 RSP: 0018:ffff883ec1037af0 EFLAGS: 00000206 RAX: 0000000000000e00 RBX: ffffea01a0817e28 RCX: ffff88803ffd9e80 RDX: 0000000000000200 RSI: 0000000000000206 RDI: 0000000000000206 RBP: 0000000000000002 R08: 0000000000000001 R09: ffff887ec724a400 R10: 0000000000000000 R11: dead000000200200 R12: ffffffff8144c26e R13: 0000000000000030 R14: 0000000000000297 R15: 000000000000000e FS: 00007ed834282700(0000) GS:ffff88c03f200000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 000000000068b240 CR3: 0000003ec13c5000 CR4: 00000000000006e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process kernel (pid: 31053, threadinfo ffff883ec1036000, task ffff883ebd5d4100) Call Trace: release_pages+0xc5/0x260 free_pages_and_swap_cache+0x9d/0xc0 tlb_flush_mmu+0x5c/0x80 tlb_finish_mmu+0xe/0x50 exit_mmap+0xbd/0x120 mmput+0x49/0x120 exit_mm+0x122/0x160 do_exit+0x17a/0x430 do_group_exit+0x3d/0xb0 get_signal_to_deliver+0x247/0x480 do_signal+0x71/0x1b0 do_notify_resume+0x98/0xb0 int_signal+0x12/0x17 DWARF2 unwinder stuck at int_signal+0x12/0x17 Signed-off-by: Michal Hocko <mhocko@suse.cz> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Andrew Morton 
<akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-16  mm: Fix PageHead when !CONFIG_PAGEFLAGS_EXTENDED  (Christoffer Dall)
commit ad4b3fb7ff9940bcdb1e4cd62bd189d10fa636ba upstream. Unfortunately with !CONFIG_PAGEFLAGS_EXTENDED, (!PageHead) is false, and (PageHead) is true, for tail pages. If this is indeed the intended behavior, which I doubt because it breaks cache cleaning on some ARM systems, then the nomenclature is highly problematic. This patch makes sure PageHead is only true for head pages and PageTail is only true for tail pages, and neither is true for non-compound pages. [ This buglet seems ancient - seems to have been introduced back in Apr 2008 in commit 6a1e7f777f61: "pageflags: convert to the use of new macros". And the reason nobody noticed is because the PageHead() tests are almost all about just sanity-checking, and only used on pages that are actual page heads. The fact that the old code returned true for tail pages too was thus not really noticeable. - Linus ] Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu> Acked-by: Andrea Arcangeli <aarcange@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Will Deacon <Will.Deacon@arm.com> Cc: Steve Capper <Steve.Capper@arm.com> Cc: Christoph Lameter <cl@linux.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
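With !CONFIG_PAGEFLAGS_EXTENDED both states are encoded in PG_compound/PG_reclaim, so the corrected tests look roughly like this (a sketch following the commit description; the mask names are assumptions, not necessarily the exact upstream identifiers):

    /* Tail pages are marked with PG_compound | PG_reclaim; head pages with
     * PG_compound only.  PageHead() must therefore exclude tail pages. */
    #define PG_head_mask            ((1L << PG_compound))
    #define PG_head_tail_mask       ((1L << PG_compound) | (1L << PG_reclaim))

    static inline int PageHead(struct page *page)
    {
            return ((page->flags & PG_head_tail_mask) == PG_head_mask);
    }

    static inline int PageTail(struct page *page)
    {
            return ((page->flags & PG_head_tail_mask) == PG_head_tail_mask);
    }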
2013-01-16  PCI: Reduce Ricoh 0xe822 SD card reader base clock frequency to 50MHz  (Andy Lutomirski)
commit 812089e01b9f65f90fc8fc670d8cce72a0e01fbb upstream. Otherwise it fails like this on cards like the Transcend 16GB SDHC card: mmc0: new SDHC card at address b368 mmcblk0: mmc0:b368 SDC 15.0 GiB mmcblk0: error -110 sending status command, retrying mmcblk0: error -84 transferring data, sector 0, nr 8, cmd response 0x900, card status 0xb0 Tested on my Lenovo x200 laptop. [bhelgaas: changelog] Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Chris Ball <cjb@laptop.org> CC: Manoj Iyer <manoj.iyer@canonical.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-03  hpsa: gen8plus Smart Array IDs  (Mike Miller)
commit fe0c9610bb68dd0aad1017456f5e3c31264d70c2 upstream. Signed-off-by: James Bottomley <JBottomley@Parallels.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-03  exec: do not leave bprm->interp on stack  (Kees Cook)
commit b66c5984017533316fd1951770302649baf1aa33 upstream. If a series of scripts are executed, each triggering module loading via unprintable bytes in the script header, kernel stack contents can leak into the command line. Normally execution of binfmt_script and binfmt_misc happens recursively. However, when modules are enabled, and unprintable bytes exist in the bprm->buf, execution will restart after attempting to load matching binfmt modules. Unfortunately, the logic in binfmt_script and binfmt_misc does not expect to get restarted. They leave bprm->interp pointing to their local stack. This means on restart bprm->interp is left pointing into unused stack memory which can then be copied into the userspace argv areas. After additional study, it seems that both recursion and restart remains the desirable way to handle exec with scripts, misc, and modules. As such, we need to protect the changes to interp. This changes the logic to require allocation for any changes to the bprm->interp. To avoid adding a new kmalloc to every exec, the default value is left as-is. Only when passing through binfmt_script or binfmt_misc does an allocation take place. For a proof of concept, see DoTest.sh from: http://www.halfdog.net/Security/2012/LinuxKernelBinfmtScriptStackDataDisclosure/ Signed-off-by: Kees Cook <keescook@chromium.org> Cc: halfdog <me@halfdog.net> Cc: P J P <ppandit@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-03  mm: add kmap_to_page()  (Ben Hutchings)
This is extracted from Mel Gorman's commit 5a178119b0fb ('mm: add support for direct_IO to highmem pages') upstream. Required to backport commit b9cdc88df8e6 ('virtio: 9p: correctly pass physical address to userspace for high pages'). Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
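The helper itself is short; a sketch of what it does, based on the upstream commit it was extracted from (highmem details elided):

    /* Map a kmap()ed highmem address (or a plain lowmem address) back to its page. */
    struct page *kmap_to_page(void *vaddr)
    {
            unsigned long addr = (unsigned long)vaddr;

            if (addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP)) {
                    int i = (addr - PKMAP_ADDR(0)) >> PAGE_SHIFT;
                    return pte_page(pkmap_page_table[i]);
            }

            return virt_to_page(addr);
    }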
2013-01-03  freezer: add missing mb's to freezer_count() and freezer_should_skip()  (Tejun Heo)
commit dd67d32dbc5de299d70cc9e10c6c1e29ffa56b92 upstream. A task is considered frozen enough between freezer_do_not_count() and freezer_count() and freezers use freezer_should_skip() to test this condition. This supposedly works because freezer_count() always calls try_to_freezer() after clearing %PF_FREEZER_SKIP. However, there currently is nothing which guarantees that freezer_count() sees %true freezing() after clearing %PF_FREEZER_SKIP when freezing is in progress, and vice-versa. A task can escape the freezing condition in effect by freezer_count() seeing !freezing() and freezer_should_skip() seeing %PF_FREEZER_SKIP. This patch adds smp_mb()'s to freezer_count() and freezer_should_skip() such that either %true freezing() is visible to freezer_count() or !PF_FREEZER_SKIP is visible to freezer_should_skip(). Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Rafael J. Wysocki <rjw@sisk.pl> [bwh: Backported to 3.2: - Adjust context and indentation - freezer_do_not_count() and freezer_count() are no-ops for kernel tasks] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
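The barrier pairing can be sketched as follows (simplified; as noted above, the 3.2 backport differs slightly for kernel tasks):

    static inline void freezer_count(void)
    {
            current->flags &= ~PF_FREEZER_SKIP;
            /*
             * Pairs with the smp_mb() in freezer_should_skip(): either this
             * task sees %true freezing(), or the freezer sees !PF_FREEZER_SKIP.
             */
            smp_mb();
            try_to_freeze();
    }

    static inline bool freezer_should_skip(struct task_struct *p)
    {
            /* Pairs with the smp_mb() in freezer_count(). */
            smp_mb();
            return p->flags & PF_FREEZER_SKIP;
    }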
2013-01-03  cgroup: cgroup_subsys->fork() should be called after the task is added to css_set  (Tejun Heo)
commit 5edee61edeaaebafe584f8fb7074c1ef4658596b upstream. cgroup core has a bug which violates a basic rule about event notifications - when a new entity needs to be added, you add that to the notification list first and then make the new entity conform to the current state. If done in the reverse order, an event happening in between will be lost. cgroup_subsys->fork() is invoked way before the new task is added to the css_set. Currently, cgroup_freezer is the only user of ->fork() and uses it to make new tasks conform to the current state of the freezer. If FROZEN state is requested while fork is in progress between cgroup_fork_callbacks() and cgroup_post_fork(), the child could escape freezing - the cgroup isn't frozen when ->fork() is called and the freezer couldn't see the new task on the css_set. This patch moves the cgroup_subsys->fork() invocation to cgroup_post_fork() after the new task is added to the css_set. cgroup_fork_callbacks() is removed. Because now a task may be migrated during cgroup_subsys->fork(), freezer_fork() is updated so that it adheres to the usual RCU locking and the rather pointless comment on why locking can be different there is removed (if it doesn't make anything simpler, why even bother?). Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Rafael J. Wysocki <rjw@sisk.pl> [bwh: Backported to 3.2: - Adjust context - Iterate over first CGROUP_BUILTIN_SUBSYS_COUNT elements of subsys - cgroup_subsys::fork takes cgroup_subsys pointer as first parameter] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2013-01-03  tmpfs: fix shared mempolicy leak  (Mel Gorman)
commit 18a2f371f5edf41810f6469cb9be39931ef9deb9 upstream. This fixes a regression in 3.7-rc, which has since gone into stable. Commit 00442ad04a5e ("mempolicy: fix a memory corruption by refcount imbalance in alloc_pages_vma()") changed get_vma_policy() to raise the refcount on a shmem shared mempolicy; whereas shmem_alloc_page() went on expecting alloc_page_vma() to drop the refcount it had acquired. This deserves a rework: but for now fix the leak in shmem_alloc_page(). Hugh: shmem_swapin() did not need a fix, but surely it's clearer to use the same refcounting there as in shmem_alloc_page(), delete its onstack mempolicy, and the strange mpol_cond_copy() and __mpol_cond_copy() - those were invented to let swapin_readahead() make an unknown number of calls to alloc_pages_vma() with one mempolicy; but since 00442ad04a5e, alloc_pages_vma() has kept refcount in balance, so now no problem. Reported-and-tested-by: Tommi Rantala <tt.rantala@gmail.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Hugh Dickins <hughd@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2012-12-06  ptp: update adjfreq callback description  (Jacob Keller)
commit 87f4d7c1d36f44b0822053b7e5dedc31fdd0ab99 upstream. This patch updates the adjfreq callback description to include a note that the delta in ppb is always relative to the base frequency, and not to the current frequency of the hardware clock. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> CC: Richard Cochran <richard.cochran@gmail.com> CC: John Stultz <john.stultz@linaro.org> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2012-11-16  xen/mmu: Use Xen specific TLB flush instead of the generic one.  (Konrad Rzeszutek Wilk)
commit 95a7d76897c1e7243d4137037c66d15cbf2cce76 upstream. As Mukesh explained it, the MMUEXT_TLB_FLUSH_ALL allows the hypervisor to do a TLB flush on all active vCPUs. If instead we were using the generic one (which ends up being xen_flush_tlb) we end up making the MMUEXT_TLB_FLUSH_LOCAL hypercall. But before we make that hypercall the kernel will IPI all of the vCPUs (even those that were asleep from the hypervisor perspective). The end result is that we needlessly wake them up and do a TLB flush when we can just let the hypervisor do it correctly. This patch gives around 50% speed improvement when migrating idle guest's from one host to another. Oracle-bug: 14630170 Tested-by: Jingjie Jiang <jingjie.jiang@oracle.com> Suggested-by: Mukesh Rathor <mukesh.rathor@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2012-11-16  mac80211: verify that skb data is present  (Johannes Berg)
commit 9b395bc3be1cebf0144a127c7e67d56dbdac0930 upstream. A number of places in the mesh code don't check that the frame data is present and in the skb header when trying to access. Add those checks and the necessary pskb_may_pull() calls. This prevents accessing data that doesn't actually exist. To do this, export ieee80211_get_mesh_hdrlen() to be able to use it in mac80211. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>