path: root/drivers/s390
Age | Commit message | Author
2019-09-12 | scsi: zfcp: fix to prevent port_remove with pure auto scan LUNs (only sdevs) | Steffen Maier
commit ef4021fe5fd77ced0323cede27979d80a56211ca upstream.

When the user tries to remove a zfcp port via sysfs, we only rejected it if there are zfcp unit children under the port. With purely automatically scanned LUNs there are no zfcp units but only SCSI devices. In such cases, the port_remove erroneously continued. We close the port and this implicitly closes all LUNs under the port. The SCSI devices survive with their private zfcp_scsi_dev still holding a reference to the "removed" zfcp_port (still allocated but invisible in sysfs) [zfcp_get_port_by_wwpn in zfcp_scsi_slave_alloc]. This is not a problem as long as the fc_rport stays blocked. Once (auto) port scan brings back the removed port, we unblock its fc_rport again by design. However, there is no mechanism that would recover (open) the LUNs under the port (no "ersfs_3" without zfcp_unit [zfcp_erp_strategy_followup_success]). Any pending or new I/O to such a LUN leads to repeated:

  Done: NEEDS_RETRY Result: hostbyte=DID_IMM_RETRY driverbyte=DRIVER_OK

See also v4.10 commit 6f2ce1c6af37 ("scsi: zfcp: fix rport unblock race with LUN recovery"). Even a manual LUN recovery (echo 0 > /sys/bus/scsi/devices/H:C:T:L/zfcp_failed) does not help, as the LUN links to the old "removed" port which still lacks ZFCP_STATUS_COMMON_RUNNING [zfcp_erp_required_act].

The only workaround is to first ensure that the fc_rport is blocked (e.g. port_remove again in case it was re-discovered by (auto) port scan), then delete the SCSI devices, and finally re-discover by (auto) port scan. The port scan includes an fc_rport unblock, which in turn triggers a new scan on the SCSI target to freshly get new pure auto scan LUNs.

Fix this by rejecting port_remove also if there are SCSI devices (even without any zfcp_unit) under this port. Re-use mechanics from v3.7 commit d99b601b6338 ("[SCSI] zfcp: restore refcount check on port_remove"). However, we have to give up zfcp_sysfs_port_units_mutex earlier in unit_add to prevent a deadlock with scsi_host scan taking shost->scan_mutex first and then zfcp_sysfs_port_units_mutex now in our zfcp_scsi_slave_alloc().

Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Fixes: b62a8d9b45b9 ("[SCSI] zfcp: Use SCSI device data zfcp scsi dev instead of zfcp unit")
Fixes: f8210e34887e ("[SCSI] zfcp: Allow midlayer to scan for LUNs when running in NPIV mode")
Cc: <stable@vger.kernel.org> #2.6.37+
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-12 | scsi: zfcp: fix missing zfcp_port reference put on -EBUSY from port_remove | Steffen Maier
commit d27e5e07f9c49bf2a6a4ef254ce531c1b4fb5a38 upstream. With this early return due to zfcp_unit child(ren), we don't use the zfcp_port reference from the earlier zfcp_get_port_by_wwpn() anymore and need to put it. Signed-off-by: Steffen Maier <maier@linux.ibm.com> Fixes: d99b601b6338 ("[SCSI] zfcp: restore refcount check on port_remove") Cc: <stable@vger.kernel.org> #3.7+ Reviewed-by: Jens Remus <jremus@linux.ibm.com> Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-08-17 | vfio-ccw: Prevent quiesce function going into an infinite loop | Farhan Ali
commit d1ffa760d22aa1d8190478e5ef555c59a771db27 upstream.

The quiesce function calls cio_cancel_halt_clear() and if we get an -EBUSY we go into a loop where we:
  - wait for any interrupts
  - flush all I/O in the workqueue
  - retry cio_cancel_halt_clear

During the period where we are waiting for interrupts or flushing all I/O, the channel subsystem could have completed a halt/clear action and turned off the corresponding activity control bits in the subchannel status word. This means the next time we call cio_cancel_halt_clear(), we will again start by calling cancel subchannel, and so we can be stuck between calling cancel and halt forever.

Rather than calling cio_cancel_halt_clear() immediately after waiting, let's try to disable the subchannel. If we succeed in disabling the subchannel then we know nothing else can happen with the device.

Suggested-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
Message-Id: <4d5a4b98ab1b41ac6131b5c36de18b76c5d66898.1555449329.git.alifm@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
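A minimal sketch of the reworked loop (control flow simplified; the wait for interrupts is elided, and only the cio helpers named above are assumed):

  do {
          ret = cio_cancel_halt_clear(sch, &iretry);
          if (ret == -EBUSY) {
                  /* wait for interrupts and flush pending I/O work here ... */
                  flush_workqueue(vfio_ccw_work_q);
                  /*
                   * Instead of retrying cancel/halt/clear right away, try
                   * to disable the subchannel: if that succeeds, nothing
                   * else can happen with the device and we are quiesced.
                   */
                  if (cio_disable_subchannel(sch) == 0)
                          break;
          }
  } while (ret == -EBUSY);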
2019-08-17 | s390: cio: fix cio_irb declaration | Arnd Bergmann
commit e91012ee855ad9f5ef2ab106a3de51db93fe4d0c upstream.

clang points out that the declaration of cio_irb does not match the definition exactly, it is missing the alignment attribute:

  ../drivers/s390/cio/cio.c:50:1: warning: section does not match previous declaration [-Wsection]
  DEFINE_PER_CPU_ALIGNED(struct irb, cio_irb);
  ^
  ../include/linux/percpu-defs.h:150:2: note: expanded from macro 'DEFINE_PER_CPU_ALIGNED'
          DEFINE_PER_CPU_SECTION(type, name, PER_CPU_ALIGNED_SECTION) \
          ^
  ../include/linux/percpu-defs.h:93:9: note: expanded from macro 'DEFINE_PER_CPU_SECTION'
          extern __PCPU_ATTRS(sec) __typeof__(type) name; \
                 ^
  ../include/linux/percpu-defs.h:49:26: note: expanded from macro '__PCPU_ATTRS'
          __percpu __attribute__((section(PER_CPU_BASE_SECTION sec))) \
                                  ^
  ../drivers/s390/cio/cio.h:118:1: note: previous attribute is here
  DECLARE_PER_CPU(struct irb, cio_irb);
  ^
  ../include/linux/percpu-defs.h:111:2: note: expanded from macro 'DECLARE_PER_CPU'
          DECLARE_PER_CPU_SECTION(type, name, "")
          ^
  ../include/linux/percpu-defs.h:87:9: note: expanded from macro 'DECLARE_PER_CPU_SECTION'
          extern __PCPU_ATTRS(sec) __typeof__(type) name
                 ^
  ../include/linux/percpu-defs.h:49:26: note: expanded from macro '__PCPU_ATTRS'
          __percpu __attribute__((section(PER_CPU_BASE_SECTION sec))) \
                                  ^

Use DECLARE_PER_CPU_ALIGNED() here, to make the two match.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Sebastian Ott <sebott@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
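The essence of the change, as taken from the two files quoted in the warning:

  /* drivers/s390/cio/cio.h, before: plain declaration, no alignment */
  DECLARE_PER_CPU(struct irb, cio_irb);

  /* drivers/s390/cio/cio.c: the definition is the aligned variant */
  DEFINE_PER_CPU_ALIGNED(struct irb, cio_irb);

  /* drivers/s390/cio/cio.h, after: declaration matches the definition */
  DECLARE_PER_CPU_ALIGNED(struct irb, cio_irb);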
2019-08-17 | s390: zcrypt: initialize variables before_use | Arnd Bergmann
commit 913140e221567b3ecd21b4242257a7e3fa279026 upstream.

The 'func_code' variable gets printed in debug statements without a prior initialization in multiple functions, as reported when building with clang:

  drivers/s390/crypto/zcrypt_api.c:659:6: warning: variable 'func_code' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
          if (mex->outputdatalength < mex->inputdatalength) {
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  drivers/s390/crypto/zcrypt_api.c:725:29: note: uninitialized use occurs here
          trace_s390_zcrypt_rep(mex, func_code, rc,
                                     ^~~~~~~~~
  drivers/s390/crypto/zcrypt_api.c:659:2: note: remove the 'if' if its condition is always false
          if (mex->outputdatalength < mex->inputdatalength) {
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  drivers/s390/crypto/zcrypt_api.c:654:24: note: initialize the variable 'func_code' to silence this warning
          unsigned int func_code;
                                ^

Add initializations to all affected code paths to shut up the warning and make the warning output consistent.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
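A sketch of the fix pattern in one of the affected paths, simplified from the warning above (the actual patch touches several functions):

  unsigned int func_code;

  if (mex->outputdatalength < mex->inputdatalength) {
          func_code = 0;   /* initialize before taking the early exit */
          rc = -EINVAL;
          goto out;
  }
  ...
  out:
          /* func_code is now always defined when the trace point fires */
          trace_s390_zcrypt_rep(mex, func_code, rc, ...);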
2019-08-02 | vfio-ccw: Release any channel program when releasing/removing vfio-ccw mdev | Farhan Ali
commit b49bdc8602b7c9c7a977758bee4125683f73e59f upstream.

When releasing the vfio-ccw mdev, we currently do not release any existing channel program and its pinned pages. This can lead to the following warning:

[1038876.561565] WARNING: CPU: 2 PID: 144727 at drivers/vfio/vfio_iommu_type1.c:1494 vfio_sanity_check_pfn_list+0x40/0x70 [vfio_iommu_type1]
....
[1038876.561921] Call Trace:
[1038876.561935] ([<00000009897fb870>] 0x9897fb870)
[1038876.561949]  [<000003ff8013bf62>] vfio_iommu_type1_detach_group+0xda/0x2f0 [vfio_iommu_type1]
[1038876.561965]  [<000003ff8007b634>] __vfio_group_unset_container+0x64/0x190 [vfio]
[1038876.561978]  [<000003ff8007b87e>] vfio_group_put_external_user+0x26/0x38 [vfio]
[1038876.562024]  [<000003ff806fc608>] kvm_vfio_group_put_external_user+0x40/0x60 [kvm]
[1038876.562045]  [<000003ff806fcb9e>] kvm_vfio_destroy+0x5e/0xd0 [kvm]
[1038876.562065]  [<000003ff806f63fc>] kvm_put_kvm+0x2a4/0x3d0 [kvm]
[1038876.562083]  [<000003ff806f655e>] kvm_vm_release+0x36/0x48 [kvm]
[1038876.562098]  [<00000000003c2dc4>] __fput+0x144/0x228
[1038876.562113]  [<000000000016ee82>] task_work_run+0x8a/0xd8
[1038876.562125]  [<000000000014c7a8>] do_exit+0x5d8/0xd90
[1038876.562140]  [<000000000014d084>] do_group_exit+0xc4/0xc8
[1038876.562155]  [<000000000015c046>] get_signal+0x9ae/0xa68
[1038876.562169]  [<0000000000108d66>] do_signal+0x66/0x768
[1038876.562185]  [<0000000000b9e37e>] system_call+0x1ea/0x2d8
[1038876.562195] 2 locks held by qemu-system-s39/144727:
[1038876.562205]  #0: 00000000537abaf9 (&container->group_lock){++++}, at: __vfio_group_unset_container+0x3c/0x190 [vfio]
[1038876.562230]  #1: 00000000670008b5 (&iommu->lock){+.+.}, at: vfio_iommu_type1_detach_group+0x36/0x2f0 [vfio_iommu_type1]
[1038876.562250] Last Breaking-Event-Address:
[1038876.562262]  [<000003ff8013aa24>] vfio_sanity_check_pfn_list+0x3c/0x70 [vfio_iommu_type1]
[1038876.562272] irq event stamp: 4236481
[1038876.562287] hardirqs last enabled at (4236489): [<00000000001cee7a>] console_unlock+0x6d2/0x740
[1038876.562299] hardirqs last disabled at (4236496): [<00000000001ce87e>] console_unlock+0xd6/0x740
[1038876.562311] softirqs last enabled at (4234162): [<0000000000b9fa1e>] __do_softirq+0x556/0x598
[1038876.562325] softirqs last disabled at (4234153): [<000000000014e4cc>] irq_exit+0xac/0x108
[1038876.562337] ---[ end trace 6c96d467b1c3ca06 ]---

Similarly we do not free the channel program when we are removing the vfio-ccw device.

Let's fix this by resetting the device and freeing the channel program and pinned pages in the release path. For the remove path we can just quiesce the device, since in the remove path the mediated device is going away for good and so we don't need to do a full reset.

Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
Message-Id: <ae9f20dc8873f2027f7b3c5d2aaa0bdfe06850b8.1554756534.git.alifm@linux.ibm.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-08-02 | vfio-ccw: Do not call flush_workqueue while holding the spinlock | Farhan Ali
commit cea5dde42a83b5f0a039da672f8686455936b8d8 upstream.

Currently we call flush_workqueue while holding the subchannel spinlock. But the flush_workqueue function can go to sleep, so do not call the function while holding the spinlock.

Fixes the following bug:

[  285.203430] BUG: scheduling while atomic: bash/14193/0x00000002
[  285.203434] INFO: lockdep is turned off.
....
[  285.203485] Preemption disabled at:
[  285.203488] [<000003ff80243e5c>] vfio_ccw_sch_quiesce+0xbc/0x120 [vfio_ccw]
[  285.203496] CPU: 7 PID: 14193 Comm: bash Tainted: G        W
....
[  285.203504] Call Trace:
[  285.203510] ([<0000000000113772>] show_stack+0x82/0xd0)
[  285.203514]  [<0000000000b7a102>] dump_stack+0x92/0xd0
[  285.203518]  [<000000000017b8be>] __schedule_bug+0xde/0xf8
[  285.203524]  [<0000000000b95b5a>] __schedule+0x7a/0xc38
[  285.203528]  [<0000000000b9678a>] schedule+0x72/0xb0
[  285.203533]  [<0000000000b9bfbc>] schedule_timeout+0x34/0x528
[  285.203538]  [<0000000000b97608>] wait_for_common+0x118/0x1b0
[  285.203544]  [<0000000000166d6a>] flush_workqueue+0x182/0x548
[  285.203550]  [<000003ff80243e6e>] vfio_ccw_sch_quiesce+0xce/0x120 [vfio_ccw]
[  285.203556]  [<000003ff80245278>] vfio_ccw_mdev_reset+0x38/0x70 [vfio_ccw]
[  285.203562]  [<000003ff802458b0>] vfio_ccw_mdev_remove+0x40/0x78 [vfio_ccw]
[  285.203567]  [<000003ff801a499c>] mdev_device_remove_ops+0x3c/0x80 [mdev]
[  285.203573]  [<000003ff801a4d5c>] mdev_device_remove+0xc4/0x130 [mdev]
[  285.203578]  [<000003ff801a5074>] remove_store+0x6c/0xa8 [mdev]
[  285.203582]  [<000000000046f494>] kernfs_fop_write+0x14c/0x1f8
[  285.203588]  [<00000000003c1530>] __vfs_write+0x38/0x1a8
[  285.203593]  [<00000000003c187c>] vfs_write+0xb4/0x198
[  285.203597]  [<00000000003c1af2>] ksys_write+0x5a/0xb0
[  285.203601]  [<0000000000b9e270>] system_call+0xdc/0x2d8

Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Message-Id: <626bab8bb2958ae132452e1ddaf1b20882ad5a9d.1554756534.git.alifm@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-08-02 | s390: qeth: address type mismatch warning | Arnd Bergmann
commit 46b83629dede262315aa82179d105581f11763b6 upstream.

clang produces a harmless warning for each use of the qeth_adp_supported macro:

  drivers/s390/net/qeth_l2_main.c:559:31: warning: implicit conversion from enumeration type 'enum qeth_ipa_setadp_cmd' to different enumeration type 'enum qeth_ipa_funcs' [-Wenum-conversion]
          if (qeth_adp_supported(card, IPA_SETADP_SET_PROMISC_MODE))
              ~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~
  drivers/s390/net/qeth_core.h:179:41: note: expanded from macro 'qeth_adp_supported'
          qeth_is_ipa_supported(&c->options.adp, f)
          ~~~~~~~~~~~~~~~~~~~~~                  ^

Add a version of this macro that uses the correct types, and remove the unused qeth_adp_enabled() macro that has the same problem.

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-05-30 | s390: ctcm: fix ctcm_new_device error return code | Arnd Bergmann
commit 27b141fc234a3670d21bd742c35d7205d03cbb3a upstream.

clang points out that the return code from this function is undefined for one of the error paths:

  ../drivers/s390/net/ctcm_main.c:1595:7: warning: variable 'result' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
          if (priv->channel[direction] == NULL) {
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../drivers/s390/net/ctcm_main.c:1638:9: note: uninitialized use occurs here
          return result;
                 ^~~~~~
  ../drivers/s390/net/ctcm_main.c:1595:3: note: remove the 'if' if its condition is always false
          if (priv->channel[direction] == NULL) {
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../drivers/s390/net/ctcm_main.c:1539:12: note: initialize the variable 'result' to silence this warning
          int result;
                    ^

Make it return -ENODEV here, as in the related failure cases. gcc has a known bug in underreporting some of these warnings when it has already eliminated the assignment of the return code based on some earlier optimization step.

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
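A sketch of the fix, matching the error handling of the surrounding failure cases (label name illustrative):

  if (priv->channel[direction] == NULL) {
          result = -ENODEV;   /* was left uninitialized on this path */
          goto out_dev;
  }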
2019-05-30 | s390/pkey: add one more argument space for debug feature entry | Harald Freudenberger
commit 6b1f16ba730d4c0cda1247568c3a1bf4fa3a2f2f upstream.

The debug feature entries have been used with up to 5 arguments (including the pointer to the format string) but there was only space reserved for 4 arguments. So now the registration reserves space for 5 long values.

This fixes a sometimes appearing weird value as the last value of a debug feature entry like this:

  ... pkey_sec2protkey zcrypt_send_cprb (cardnr=10 domain=12) failed with errno -2143346254

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reported-by: Christian Rund <Christian.Rund@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-05-30 | s390/3270: fix lockdep false positive on view->lock | Martin Schwidefsky
commit 5712f3301a12c0c3de9cc423484496b0464f2faf upstream.

The spinlock in the raw3270_view structure is used by con3270, tty3270 and fs3270 in different ways. For con3270 the lock can be acquired in irq context, for tty3270 and fs3270 the highest context is bh.

Lockdep sees the view->lock as a single class and if the 3270 driver is used for the console the following message is generated:

  WARNING: inconsistent lock state
  5.1.0-rc3-05157-g5c168033979d #12 Not tainted
  --------------------------------
  inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
  swapper/0/1 [HC0[0]:SC1[1]:HE1:SE0] takes:
  (____ptrval____) (&(&view->lock)->rlock){?.-.}, at: tty3270_update+0x7c/0x330

Introduce a lockdep subclass for the view lock to distinguish bh from irq locks.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-05-30 | s390/dasd: Fix capacity calculation for large volumes | Peter Oberparleiter
commit 2cc9637ce825f3a9f51f8f78af7474e9e85bfa5f upstream.

The DASD driver incorrectly limits the maximum number of blocks of ECKD DASD volumes to 32 bit numbers. Volumes with a capacity greater than 2^32-1 blocks are incorrectly recognized as smaller volumes. This results in the following volume capacity limits depending on the formatted block size:

  BLKSIZE   MAX_GB    MAX_CYL
      512     2047    5843492
     1024     4095    8676701
     2048     8191   13634816
     4096    16383   23860929

The same problem occurs when a volume with more than 17895697 cylinders is accessed in raw-track-access mode.

Fix this problem by adding an explicit type cast when calculating the maximum number of blocks.

Signed-off-by: Peter Oberparleiter <oberpar@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
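The overflow pattern and its fix, as a hedged sketch (field names illustrative; the point is that the multiplication must be widened before it overflows 32 bits):

  /* all three factors are 32-bit, so the product is computed in 32 bits
   * and silently truncated for volumes beyond 2^32-1 blocks */
  blocks = cylinders * trk_per_cyl * blk_per_trk;

  /* widen one operand so the whole product is computed in 64 bits */
  blocks = (sector_t)cylinders * trk_per_cyl * blk_per_trk;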
2019-05-23 | scsi: zfcp: reduce flood of fcrscn1 trace records on multi-element RSCN | Steffen Maier
commit c8206579175c34a2546de8a74262456278a7795a upstream.

If an incoming ELS of type RSCN contains more than one element, zfcp suboptimally causes repeated erp trigger NOP trace records for each previously failed port. These could be ports that went away. It loops over each RSCN element, and for each of those in an inner loop over all zfcp_ports.

The trigger to recover failed ports should be just the reception of some RSCN, no matter how many elements it has. So we can loop over failed ports separately, and only then loop over each RSCN element to handle the non-failed ports.

The call chain was:

  zfcp_fc_incoming_rscn
    for (i = 1; i < no_entries; i++)
      _zfcp_fc_incoming_rscn
        list_for_each_entry(port, &adapter->port_list, list)
          if (masked port->d_id match)
            zfcp_fc_test_link
          if (!port->d_id)
            zfcp_erp_port_reopen "fcrscn1"                        <===

In order to reduce the "flooding" of the REC trace area in such cases, we factor out handling the failed ports to be outside of the entries loop:

  zfcp_fc_incoming_rscn
    if (no_entries > 1)                                           <===
      list_for_each_entry(port, &adapter->port_list, list)        <===
        if (!port->d_id)
          zfcp_erp_port_reopen "fcrscn1"                          <===
    for (i = 1; i < no_entries; i++)
      _zfcp_fc_incoming_rscn
        list_for_each_entry(port, &adapter->port_list, list)
          if (masked port->d_id match)
            zfcp_fc_test_link

Abbreviated example trace records before this code change:

  Tag      : fcrscn1
  WWPN     : 0x500507630310d327
  ERP want : 0x02
  ERP need : 0x02

  Tag      : fcrscn1
  WWPN     : 0x500507630310d327
  ERP want : 0x02
  ERP need : 0x00    NOP => superfluous trace record

The last trace entry repeats if there are more than 2 RSCN elements.

Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-05-23 | s390/qeth: fix race when initializing the IP address table | Julian Wiedmann
commit 7221b727f0079a32aca91f657141e1de564d4b97 upstream. The ucast IP table is utilized by some of the L3-specific sysfs attributes that qeth_l3_create_device_attributes() provides. So initialize the table _before_ registering the attributes. Fixes: ebccc7397e4a ("s390/qeth: add missing hash table initializations") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-04-07 | scsi: zfcp: fix scsi_eh host reset with port_forced ERP for non-NPIV FCP devices | Steffen Maier
commit 242ec1455151267fe35a0834aa9038e4c4670884 upstream.

Suppose more than one non-NPIV FCP device is active on the same channel. Send I/O to storage and have some of the pending I/O run into a SCSI command timeout, e.g. due to bit errors on the fibre. Now the error situation stops. However, we saw FCP requests continue to time out in the channel. The abort will be successful, but the subsequent TUR fails. Scsi_eh starts. The LUN reset fails. The target reset fails. The host reset only did an FCP device recovery. However, for non-NPIV FCP devices, this does not close and reopen ports on the SAN side if other non-NPIV FCP device(s) share the same open ports.

In order to resolve the continuing FCP request timeouts, we need to explicitly close and reopen ports on the SAN side. This was missing since the beginning of zfcp in v2.6.0 history commit ea127f975424 ("[PATCH] s390 (7/7): zfcp host adapter.").

Note: The FSF requests for forced port reopen could run into FSF request timeouts due to other reasons. This would trigger an internal FCP device recovery. Pending forced port reopen recoveries would get dismissed. So some ports might not get fully reopened during this host reset handler. However, subsequent I/O would trigger the above described escalation, and eventually all ports would be forced reopen to resolve any continuing FCP request timeouts due to earlier bit errors.

Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: <stable@vger.kernel.org> #3.0+
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-04-07 | scsi: zfcp: fix rport unblock if deleted SCSI devices on Scsi_Host | Steffen Maier
commit fe67888fc007a76b81e37da23ce5bd8fb95890b0 upstream.

An already deleted SCSI device can exist on the Scsi_Host and remain there because something still holds a reference. A new SCSI device with the same H:C:T:L and FCP device, target port WWPN, and FCP LUN can be created. When we try to unblock an rport, we still find the deleted SCSI device and return early because the zfcp_scsi_dev of that SCSI device is not ZFCP_STATUS_COMMON_UNBLOCKED. Hence we fail to unblock the rport, even if the new proper SCSI device would be in good state.

Therefore, skip deleted SCSI devices when iterating the sdevs of the shost. [cf. __scsi_device_lookup{_by_target}() or scsi_device_get()]

The following abbreviated trace sequence can indicate such a problem:

  Area          : REC
  Tag           : ersfs_3
  LUN           : 0x4045400300000000
  WWPN          : 0x50050763031bd327
  LUN status    : 0x40000000          not ZFCP_STATUS_COMMON_UNBLOCKED
  Ready count   : n                   not incremented yet
  Running count : 0x00000000
  ERP want      : 0x01
  ERP need      : 0xc1                ZFCP_ERP_ACTION_NONE

  Area          : REC
  Tag           : ersfs_3
  LUN           : 0x4045400300000000
  WWPN          : 0x50050763031bd327
  LUN status    : 0x41000000
  Ready count   : n+1
  Running count : 0x00000000
  ERP want      : 0x01
  ERP need      : 0x01

  ...

  Area          : REC
  Level         : 4                   only with increased trace level
  Tag           : ertru_l
  LUN           : 0x4045400300000000
  WWPN          : 0x50050763031bd327
  LUN status    : 0x40000000
  Request ID    : 0x0000000000000000
  ERP status    : 0x01800000
  ERP step      : 0x1000
  ERP action    : 0x01
  ERP count     : 0x00

NOT followed by a trace record with tag "scpaddy" for WWPN 0x50050763031bd327.

Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Fixes: 6f2ce1c6af37 ("scsi: zfcp: fix rport unblock race with LUN recovery")
Cc: <stable@vger.kernel.org> #2.6.32+
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-04-07 | vfio: ccw: only free cp on final interrupt | Cornelia Huck
commit 50b7f1b7236bab08ebbbecf90521e84b068d7a17 upstream. When we get an interrupt for a channel program, it is not necessarily the final interrupt; for example, the issuing guest may request an intermediate interrupt by specifying the program-controlled-interrupt flag on a ccw. We must not switch the state to idle if the interrupt is not yet final; even more importantly, we must not free the translated channel program if the interrupt is not yet final, or the host can crash during cp rewind. Fixes: e5f84dbaea59 ("vfio: ccw: return I/O results asynchronously") Cc: stable@vger.kernel.org # v4.12+ Reviewed-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-03-26 | s390/virtio: handle find on invalid queue gracefully | Halil Pasic
commit 3438b2c039b4bf26881786a1f3450f016d66ad11 upstream. A queue with a capacity of zero is clearly not a valid virtio queue. Some emulators report zero queue size if queried with an invalid queue index. Instead of crashing in this case let us just return -ENOENT. To make that work properly, let us fix the notifier cleanup logic as well. Cc: stable@vger.kernel.org Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-03-26 | s390/dasd: fix using offset into zero size array error | Stefan Haberland
commit 4a8ef6999bce998fa5813023a9a6b56eea329dba upstream.

Dan Carpenter reported the following:

  The patch 52898025cf7d: "[S390] dasd: security and PSF update patch for EMC CKD ioctl" from Mar 8, 2010, leads to the following static checker warning:

    drivers/s390/block/dasd_eckd.c:4486 dasd_symm_io() error: using offset into zero size array 'psf_data[]'

  drivers/s390/block/dasd_eckd.c
    4458          /* Copy parms from caller */
    4459          rc = -EFAULT;
    4460          if (copy_from_user(&usrparm, argp, sizeof(usrparm)))
                                                     ^^^^^^^
  The user can specify any "usrparm.psf_data_len". They choose zero by mistake.

    4461                  goto out;
    4462          if (is_compat_task()) {
    4463                  /* Make sure pointers are sane even on 31 bit. */
    4464                  rc = -EINVAL;
    4465                  if ((usrparm.psf_data >> 32) != 0)
    4466                          goto out;
    4467                  if ((usrparm.rssd_result >> 32) != 0)
    4468                          goto out;
    4469                  usrparm.psf_data &= 0x7fffffffULL;
    4470                  usrparm.rssd_result &= 0x7fffffffULL;
    4471          }
    4472          /* alloc I/O data area */
    4473          psf_data = kzalloc(usrparm.psf_data_len, GFP_KERNEL | GFP_DMA);
    4474          rssd_result = kzalloc(usrparm.rssd_result_len, GFP_KERNEL | GFP_DMA);
    4475          if (!psf_data || !rssd_result) {

  kzalloc() returns a ZERO_SIZE_PTR (0x16).

    4476                  rc = -ENOMEM;
    4477                  goto out_free;
    4478          }
    4479
    4480          /* get syscall header from user space */
    4481          rc = -EFAULT;
    4482          if (copy_from_user(psf_data,
    4483                             (void __user *)(unsigned long) usrparm.psf_data,
    4484                             usrparm.psf_data_len))

  That all works great.

    4485                  goto out_free;
    4486          psf0 = psf_data[0];
    4487          psf1 = psf_data[1];

  But now we're assuming that "->psf_data_len" was at least 2 bytes.

Fix this by checking the user specified length psf_data_len.

Fixes: 52898025cf7d ("[S390] dasd: security and PSF update patch for EMC CKD ioctl")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
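A minimal sketch of the added sanity check (the exact bounds in the patch may differ):

  /* reject user-supplied lengths too small for the two bytes read at
   * psf_data[0] and psf_data[1], or unreasonably large */
  if (usrparm.psf_data_len < 2 || usrparm.psf_data_len > PAGE_SIZE)
          return -EINVAL;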
2019-03-20 | s390/qeth: cancel close_dev work before removing a card | Julian Wiedmann
commit c2780c1a3fb724560b1d44f7976e0de17bf153c7 upstream. A card's close_dev work is scheduled on a driver-wide workqueue. If the card is removed and freed while the work is still active, this causes a use-after-free. So make sure that the work is completed before freeing the card. Fixes: 0f54761d167f ("qeth: Support VEPA mode") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
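The fix pattern, sketched (assuming the work item is reachable from the card; member and helper names illustrative):

  /* wait for a scheduled close_dev work item to finish before the
   * card it references is freed */
  cancel_work_sync(&card->close_dev_work);
  qeth_core_free_card(card);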
2019-03-20 | s390/qeth: fix use-after-free in error path | Julian Wiedmann
commit afa0c5904ba16d59b0454f7ee4c807dae350f432 upstream. The error path in qeth_alloc_qdio_buffers() that takes care of cleaning up the Output Queues is buggy. It first frees the queue, but then calls qeth_clear_outq_buffers() with that very queue struct. Make the call to qeth_clear_outq_buffers() part of the free action (in the correct order), and while at it fix the naming of the helper. Fixes: 0da9581ddb0f ("qeth: exploit asynchronous delivery of storage blocks") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-03-20 | s390/qeth: release cmd buffer in error paths | Julian Wiedmann
commit 5065b2dd3e5f9247a6c9d67974bc0472bf561b9d upstream. Whenever we fail before/while starting an IO, make sure to release the IO buffer. Usually qeth_irq() would do this for us, but if the IO doesn't even start we obviously won't get an interrupt for it either. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-03-08 | s390/zcrypt: fix specification exception on z196 during ap probe | Harald Freudenberger
commit 8f9aca0c45322a807a343fc32f95f2500f83b9ae upstream.

The older machines don't have the QCI instruction available. With support for up to 256 crypto cards, the probing of each card has been extended to check card ids from 0 up to 255. For machines with QCI support there is a filter limiting the range of probed cards. The older machines (z196 and older) don't have this filter, and so, since support for 256 cards is in the driver, all cards are probed. However, these machines also require the card id to fit into 6 bits. Exceeding this limit results in a specification exception, which happens on every kernel startup even when no crypto is configured or used at all.

This fix limits the range of probed crypto cards to 64 if the QCI instruction is not available, to obey the older ap architecture, and so fixes the specification exceptions on z196 machines.

Cc: stable@vger.kernel.org # v4.17+
Fixes: af4a72276d49 ("s390/zcrypt: Support up to 256 crypto adapters.")
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-03-01 | s390/zcrypt: improve special ap message cmd handling | Harald Freudenberger
commit be534791011100d204602e2e0496e9e6ce8edf63 upstream. There exist very few ap messages which need to have the 'special' flag enabled. This flag tells the firmware layer to do some pre- and maybe postprocessing. However, it may happen that this special flag is enabled but the firmware is unable to deal with this kind of message and thus returns with reply code 0x41. For example older firmware may not know the newest messages triggered by the zcrypt device driver and thus react with reject and the named reply code. Unfortunately this reply code is not known to the zcrypt error routines and thus default behavior is to switch the ap queue offline. This patch now makes the ap error routine aware of the reply code and so userspace is informed about the bad processing result but the queue is not switched to offline state any more. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-02-19 | s390/smp: fix CPU hotplug deadlock with CPU rescan | Gerald Schaefer
commit b7cb707c373094ce4008d4a6ac9b6b366ec52da5 upstream.

smp_rescan_cpus() is called without the device_hotplug_lock, which can lead to a deadlock when a new CPU is found and immediately set online by a udev rule.

This was observed on an older kernel version, where the cpu_hotplug_begin() loop was still present, and it resulted in hanging chcpu and systemd-udev processes. This specific deadlock will not show on current kernels. However, there may be other possible deadlocks, and since smp_rescan_cpus() can still trigger a CPU hotplug operation, the device_hotplug_lock should be held.

For reference, this was the deadlock with the old cpu_hotplug_begin() loop:

  chcpu (rescan)                          systemd-udevd

  echo 1 > /sys/../rescan
  -> smp_rescan_cpus()
  -> (*) get_online_cpus()
         (increases refcount)
  -> smp_add_present_cpu()
     (new CPU found)
  -> register_cpu()
  -> device_add()
  -> udev "add" event triggered ----------> udev rule sets CPU online
                                          -> echo 1 > /sys/.../online
                                          -> lock_device_hotplug_sysfs()
                                             (this is missing in rescan path)
                                          -> device_online()
                                          -> (**) device_lock(new CPU dev)
                                          -> cpu_up()
                                          -> cpu_hotplug_begin()
                                             (loops until refcount == 0)
                                          -> deadlock with (*)
  -> bus_probe_device()
  -> device_attach()
  -> device_lock(new CPU dev)
     -> deadlock with (**)

Fix this by taking the device_hotplug_lock in the CPU rescan path.

Cc: <stable@vger.kernel.org>
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
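The resulting rescan path, sketched (this mirrors what the sysfs online path already does via lock_device_hotplug_sysfs()):

  /* take the same lock that device_online()/device_offline() rely on */
  lock_device_hotplug();
  rc = smp_rescan_cpus();
  unlock_device_hotplug();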
2019-01-14 | scsi: zfcp: fix posting too many status read buffers leading to adapter shutdown | Steffen Maier
commit 60a161b7e5b2a252ff0d4c622266a7d8da1120ce upstream.

Suppose adapter (open) recovery is between opened QDIO queues and before (the end of) initial posting of status read buffers (SRBs). This time window can be seconds long due to FSF_PROT_HOST_CONNECTION_INITIALIZING causing by design looping with exponentially increasing sleeps in the function performing exchange config data during recovery [zfcp_erp_adapter_strat_fsf_xconf()]. Recovery is triggered by local link up.

Suppose an event occurs for which the FCP channel would send an unsolicited notification to zfcp by means of a previously posted SRB. We saw it with local cable pull (link down) in multi-initiator zoning with multiple NPIV-enabled subchannels of the same shared FCP channel.

As soon as zfcp_erp_adapter_strategy_open_fsf() starts posting the initial status read buffers from within the adapter's ERP thread, the channel does send an unsolicited notification. Since v2.6.27 commit d26ab06ede83 ("[SCSI] zfcp: receiving an unsolicted status can lead to I/O stall"), zfcp_fsf_status_read_handler() schedules adapter->stat_work to re-fill the just consumed SRB from a work item.

Now the ERP thread and the work item post SRBs in parallel. Both contexts call the helper function zfcp_status_read_refill(). The tracking of missing (to be posted / re-filled) SRBs is not thread-safe, due to separate atomic_read() and atomic_dec(), in order to depend on posting success. Hence, both contexts can see atomic_read(&adapter->stat_miss) == 1. One of the two contexts posts one too many SRBs. Zfcp gets QDIO_ERROR_SLSB_STATE on the output queue (trace tag "qdireq1"), leading to zfcp_erp_adapter_shutdown() in zfcp_qdio_handler_error().

An obvious and seemingly clean fix would be to schedule stat_work from the ERP thread and wait for it to finish. This would serialize all SRB re-fills. However, we already have another work item wait on the ERP thread: adapter->scan_work runs zfcp_fc_scan_ports() which calls zfcp_fc_eval_gpn_ft(). The latter calls zfcp_erp_wait() to wait for all the open port recoveries during zfcp auto port scan, but in fact it waits for any pending recovery including an adapter recovery. This approach leads to a deadlock. [see also v3.19 commit 18f87a67e6d6 ("zfcp: auto port scan resiliency"); v2.6.37 commit d3e1088d6873 ("[SCSI] zfcp: No ERP escalation on gpn_ft eval"); v2.6.28 commit fca55b6fb587 ("[SCSI] zfcp: fix deadlock between wq triggered port scan and ERP") fixing v2.6.27 commit c57a39a45a76 ("[SCSI] zfcp: wait until adapter is finished with ERP during auto-port"); v2.6.27 commit cc8c282963bd ("[SCSI] zfcp: Automatically attach remote ports")]

Instead, make the accounting of missing SRBs atomic for parallel execution in both the ERP thread and adapter->stat_work.

Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Fixes: d26ab06ede83 ("[SCSI] zfcp: receiving an unsolicted status can lead to I/O stall")
Cc: <stable@vger.kernel.org> #2.6.27+
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
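A sketch of the difference, assuming a hypothetical post_status_read_buffer() helper:

  /* racy: the check and the decrement are separate operations, so the
   * ERP thread and stat_work can both see stat_miss == 1 and both post */
  if (atomic_read(&adapter->stat_miss)) {
          post_status_read_buffer(adapter);
          atomic_dec(&adapter->stat_miss);
  }

  /* atomic: claim one credit first, give it back if posting fails */
  while (atomic_add_unless(&adapter->stat_miss, -1, 0)) {
          if (post_status_read_buffer(adapter) < 0) {
                  atomic_inc(&adapter->stat_miss);
                  break;
          }
  }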
2018-12-30 | s390/cio: Fix cleanup when unsupported IDA format is used | Eric Farman
commit b89e242eee8d4cd8261d8d821c62c5d1efc454d0 upstream. Direct returns from within a loop are rude, but it doesn't mean it gets to avoid releasing the memory acquired beforehand. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20181109023937.96105-3-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2018-12-30 | s390/cio: Fix cleanup of pfn_array alloc failure | Eric Farman
commit 806212f91c874b24cf9eb4a9f180323671b6c5ed upstream. If pfn_array_alloc fails somehow, we need to release the pfn_array_table that was malloc'd earlier. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20181109023937.96105-2-farman@linux.ibm.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2018-12-30 | virtio/s390: fix race in ccw_io_helper() | Halil Pasic
commit 78b1a52e05c9db11d293342e8d6d8a230a04b4e7 upstream.

While ccw_io_helper() seems to be intended to be exclusive, in the sense that it is supposed to facilitate I/O for at most one thread at any given time, there is actually nothing ensuring that threads won't pile up at vcdev->wait_q. If they do, all threads get woken up and see the status that belongs to some other request than their own. This can lead to bugs. For an example see: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1788432

This race normally does not cause any problems. The operations provided by struct virtio_config_ops are usually invoked in a well-defined sequence, normally don't fail, and are normally used quite infrequently too. Yet, if some of these operations are directly triggered via sysfs attributes, like in the case described by the referenced bug, userspace is given an opportunity to force races by increasing the frequency of the given operations.

Let us fix the problem by ensuring that, for each device, we finish processing the previous request before starting with a new one.

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reported-by: Colin Ian King <colin.king@canonical.com>
Cc: stable@vger.kernel.org
Message-Id: <20180925121309.58524-3-pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
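One way to achieve this per-device serialization, as a sketch (the lock name is illustrative, not the patch's actual member):

  /* allow only one request in flight per device, so each waiter sees
   * the completion status of its own request */
  mutex_lock(&vcdev->io_lock);
  /* ... start the I/O, wait on vcdev->wait_q, read the status ... */
  mutex_unlock(&vcdev->io_lock);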
2018-12-30 | virtio/s390: avoid race on vcdev->config | Halil Pasic
commit 2448a299ec416a80f699940a86f4a6d9a4f643b1 upstream. Currently we have a race on vcdev->config in virtio_ccw_get_config() and in virtio_ccw_set_config(). This normally does not cause problems, as these are usually infrequent operations. However, for some devices writing to/reading from the config space can be triggered through sysfs attributes. For these, userspace can force the race by increasing the frequency. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Cc: stable@vger.kernel.org Message-Id: <20180925121309.58524-2-pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2018-12-15 | s390/qeth: fix length check in SNMP processing | Julian Wiedmann
commit 9a764c1e59684c0358e16ccaafd870629f2cfe67 upstream. The response for a SNMP request can consist of multiple parts, which the cmd callback stages into a kernel buffer until all parts have been received. If the callback detects that the staging buffer provides insufficient space, it bails out with error. This processing is buggy for the first part of the response - while it initially checks for a length of 'data_len', it later copies an additional amount of 'offsetof(struct qeth_snmp_cmd, data)' bytes. Fix the calculation of 'data_len' for the first part of the response. This also nicely cleans up the memcpy code. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2018-12-04 | s390/qeth: fix HiperSockets sniffer | Julian Wiedmann
commit bd74a7f9cc033cf4d405788f80292268987dc0c5 upstream. Sniffing mode for L3 HiperSockets requires that no IP addresses are registered with the HW. The preferred way to achieve this is for userspace to delete all the IPs on the interface. But qeth is expected to also tolerate a configuration where that is not the case, by skipping the IP registration when in sniffer mode. Since commit 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") reworked the IP registration logic in the L3 subdriver, this no longer works. When the qeth device is set online, qeth_l3_recover_ip() now unconditionally registers all unicast addresses from our internal IP table. While we could fix this particular problem by skipping qeth_l3_recover_ip() on a sniffer device, the more future-proof change is to skip the IP address registration at the lowest level. This way we a) catch any future code path that attempts to register an IP address without considering the sniffer scenario, and b) continue to build up our internal IP table, so that if sniffer mode is switched off later we can operate just like normal. Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2018-10-18 | s390/cio: Fix how vfio-ccw checks pinned pages | Eric Farman
commit 24abf2901b18bf941b9f21ea2ce5791f61097ae4 upstream. We have two nested loops to check the entries within the pfn_array_table arrays. But we mistakenly use the outer array as an index in our check, and completely ignore the indexing performed by the inner loop. Cc: stable@vger.kernel.org Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20181002010235.42483-1-farman@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-10 | s390/qeth: don't dump past end of unknown HW header | Julian Wiedmann
[ Upstream commit 0ac1487c4b2de383b91ecad1be561b8f7a2c15f4 ] For inbound data with an unsupported HW header format, only dump the actual HW header. We have no idea how much payload follows it, and what it contains. Worst case, we dump past the end of the Inbound Buffer and access whatever is located next in memory. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-10 | s390/qeth: use vzalloc for QUERY OAT buffer | Wenjia Zhang
[ Upstream commit aec45e857c5538664edb76a60dd452e3265f37d1 ] qeth_query_oat_command() currently allocates the kernel buffer for the SIOC_QETH_QUERY_OAT ioctl with kzalloc. So on systems with fragmented memory, large allocations may fail (eg. the qethqoat tool by default uses 132KB). Solve this issue by using vzalloc, backing the allocation with non-contiguous memory. Signed-off-by: Wenjia Zhang <wenjia@linux.ibm.com> Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
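The allocation change, sketched with illustrative variable names (vzalloc/vfree are the standard kernel API for virtually contiguous allocations):

  /* kzalloc needs physically contiguous pages and may fail for the
   * ~132KB buffer on fragmented systems */
  buf = kzalloc(priv.buffer_len, GFP_KERNEL);

  /* vzalloc only needs virtually contiguous memory */
  buf = vzalloc(priv.buffer_len);
  ...
  vfree(buf);   /* vzalloc'ed memory is released with vfree */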
2018-10-03 | s390/scm_blk: correct numa_node in scm_blk_dev_setup | Vasily Gorbik
[ Upstream commit d642d6262f4fcfa5d200ec6e218c17f0c15b3390 ] The numa_node field of the tag_set struct has to be explicitly initialized, otherwise it stays as 0, which is a valid numa node id and causes a memory allocation failure if node 0 is offline. Acked-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
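The one-line fix, sketched (the dasd commit below applies the same pattern to its own tag_set; the struct path here is illustrative):

  /* NUMA_NO_NODE tells blk-mq there is no node affinity, so it picks
   * any valid node instead of defaulting to node 0 */
  bdev->tag_set.numa_node = NUMA_NO_NODE;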
2018-10-03 | s390/dasd: correct numa_node in dasd_alloc_queue | Vasily Gorbik
[ Upstream commit b17e3abb0af404cb62ad4ef1a5962f58b06e2b78 ] The numa_node field of the tag_set struct has to be explicitly initialized, otherwise it stays as 0, which is a valid numa node id and causes a memory allocation failure if node 0 is offline. Acked-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-09-26 | s390/qeth: reset layer2 attribute on layer switch | Julian Wiedmann
[ Upstream commit 70551dc46ffa3555a0b5f3545b0cd87ab67fd002 ] After the subdriver's remove() routine has completed, the card's layer mode is undetermined again. Reflect this in the layer2 field. If qeth_dev_layer2_store() hits an error after remove() was called, the card _always_ requires a setup(), even if the previous layer mode is requested again. But qeth_dev_layer2_store() bails out early if the requested layer mode still matches the current one. So unless we reset the layer2 field, re-probing the card back to its previous mode is currently not possible. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-09-26 | s390/qeth: fix race in used-buffer accounting | Julian Wiedmann
[ Upstream commit a702349a4099cd5a7bab0904689d8e0bf8dcd622 ] By updating q->used_buffers only _after_ do_QDIO() has completed, there is a potential race against the buffer's TX completion. In the unlikely case that the TX completion path wins, qeth_qdio_output_handler() would decrement the counter before qeth_flush_buffers() even incremented it. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-09-15 | s390/dasd: fix panic for failed online processing | Stefan Haberland
[ Upstream commit 7c6553d4db03350dad0110c3224194c19df76a8f ] Fix a panic that occurs for a device that got an error in dasd_eckd_check_characteristics() during online processing. For example the read configuration data command may have failed. If this error occurs the device is not being set online and the earlier invoked steps during online processing are rolled back. Therefore dasd_eckd_uncheck_device() is called which needs a valid private structure. But this pointer is not valid if dasd_eckd_check_characteristics() has failed. Check for a valid device->private pointer to prevent a panic. Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-09-15 | s390/dasd: fix hanging offline processing due to canceled worker | Stefan Haberland
[ Upstream commit 669f3765b755fd8739ab46ce3a9c6292ce8b3d2a ] During offline processing two worker threads are canceled without freeing the device reference which leads to a hanging offline process. Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-09-05 | s390/qdio: reset old sbal_state flags | Julian Wiedmann
commit 64e03ff72623b8c2ea89ca3cb660094e019ed4ae upstream. When allocating a new AOB fails, handle_outbound() is still capable of transmitting the selected buffer (just without async completion). But if a previous transfer on this queue slot used async completion, its sbal_state flags field is still set to QDIO_OUTBUF_STATE_FLAG_PENDING. So when the upper layer driver sees this stale flag, it expects an async completion that never happens. Fix this by unconditionally clearing the flags field. Fixes: 104ea556ee7f ("qdio: support asynchronous delivery of storage blocks") Cc: <stable@vger.kernel.org> #v3.2+ Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
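A sketch of the fix (the exact struct path is illustrative): clear the per-buffer completion state unconditionally whenever the buffer is handed to the HW again:

  /* previously only touched when an AOB was allocated; now always
   * reset, so no stale QDIO_OUTBUF_STATE_FLAG_PENDING survives reuse */
  q->u.out.sbal_state[bufnr].flags = 0;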
2018-07-06 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux | Linus Torvalds
Pull s390 fixes from Martin Schwidefsky:
 "A few more changes for v4.18:
   - wire up the two new system calls io_pgetevents and rseq
   - fix a register corruption in the expolines code for machines without EXRL
   - drastically reduce the memory utilization of the dasd driver
   - fix reference counting for KVM page table pages"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390: wire up rseq system call
  s390: wire up io_pgetevents system call
  s390/mm: fix refcount usage for 4K pgste
  s390/dasd: reduce the default queue depth and nr of hardware queues
  s390: Correct register corruption in critical section cleanup
2018-07-02 | s390/dasd: reduce the default queue depth and nr of hardware queues | Stefan Haberland
Reduce the default values for the number of hardware queues and queue depth to significantly reduce the memory footprint of a DASD device. The memory consumption per DASD device reduces from approximately 40MB to approximately 1.5MB. This is necessary to build systems with a large number of DASD devices and a reasonable amount of memory. Performance measurements showed that good performance results are possible with the new default values even on systems with lots of CPUs and lots of alias devices. Fixes: e443343e509a ("s390/dasd: blk-mq conversion") Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-06-30 | s390/qeth: consistently re-enable device features | Julian Wiedmann
commit e830baa9c3f0 ("qeth: restore device features after recovery") and commit ce3443564145 ("s390/qeth: rely on kernel for feature recovery") made sure that the HW functions for device features get re-programmed after recovery. But we missed that the same handling is also required when a card is first set offline (destroying all HW context), and then online again. Fix this by moving the re-enable action out of the recovery-only path. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-06-30 | s390/qeth: don't clobber buffer on async TX completion | Julian Wiedmann
If qeth_qdio_output_handler() detects that a transmit requires async completion, it replaces the pending buffer's metadata object (qeth_qdio_out_buffer) so that this queue buffer can be re-used while the data is pending completion.

Later when the CQ indicates async completion of such a metadata object, qeth_qdio_cq_handler() tries to free any data associated with this object (since HW has now completed the transfer). By calling qeth_clear_output_buffer(), it erroneously operates on the queue buffer that _previously_ belonged to this transfer ... but which has been potentially re-used several times by now. This results in double-frees of the buffer's data, and failing transmits as the buffer descriptor is scrubbed in mid-air.

The correct way of handling this situation is to
 1. scrub the queue buffer when it is prepared for re-use, and
 2. later obtain the data addresses from the async-completion notifier (ie. the AOB), instead of the queue buffer.

All this only affects qeth devices used for af_iucv HiperTransport.

Fixes: 0da9581ddb0f ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-06-30 | s390/qeth: avoid using is_multicast_ether_addr_64bits on (u8 *)[6] | Vasily Gorbik
*ether_addr*_64bits functions have been introduced to optimize performance-critical paths, which access 6-byte ethernet addresses as a u64 value to get "nice" assembly. A harmless hack works nicely on ethernet addresses shoved into a structure or a larger buffer, until busted by KASAN on something like a plain (u8 *)[6].

qeth_l2_set_mac_address calls qeth_l2_remove_mac passing u8 old_addr[ETH_ALEN] as an argument. Adding/removing MACs for an ethernet adapter is not that performance-critical. Moreover, is_multicast_ether_addr_64bits itself on s390 is not faster than is_multicast_ether_addr:

  is_multicast_ether_addr(%r2) -> %r2
    llc   %r2,0(%r2)
    risbg %r2,%r2,63,191,0

  is_multicast_ether_addr_64bits(%r2) -> %r2
    llgc  %r2,0(%r2)
    risbg %r2,%r2,63,191,0

So, let's just use is_multicast_ether_addr instead of is_multicast_ether_addr_64bits.

Fixes: bcacfcbc82b4 ("s390/qeth: fix MAC address update sequence")
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
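The change itself is essentially a one-liner; both helpers live in <linux/etherdevice.h>:

  /* is_multicast_ether_addr_64bits() may touch 8 bytes; for a plain
   * u8 old_addr[ETH_ALEN] the last 2 bytes are out of bounds (KASAN) */
  - if (is_multicast_ether_addr_64bits(old_addr))
  + if (is_multicast_ether_addr(old_addr))   /* reads exactly 6 bytes */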
2018-06-30 | s390/qeth: fix race when setting MAC address | Julian Wiedmann
When qeth_l2_set_mac_address() finds the card in a non-reachable state, it merely copies the new MAC address into dev->dev_addr so that __qeth_l2_set_online() can later register it with the HW. But __qeth_l2_set_online() may very well be running concurrently, so we can't trust the card state without appropriate locking: If the online sequence is past the point where it registers dev->dev_addr (but not yet in SOFTSETUP state), any address change needs to be properly programmed into the HW. Otherwise the netdevice ends up with a different MAC address than what's set in the HW, and inbound traffic is not forwarded as expected. This is most likely to occur for OSD in LPAR, where commit 21b1702af12e ("s390/qeth: improve fallback to random MAC address") now triggers eg. systemd to immediately change the MAC when the netdevice is registered with a NET_ADDR_RANDOM address. Fixes: bcacfcbc82b4 ("s390/qeth: fix MAC address update sequence") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-06-30 | Revert "s390/qeth: use Read device to query hypervisor for MAC" | Julian Wiedmann
This reverts commit b7493e91c11a757cf0f8ab26989642ee4bb2c642.

On its own, querying RDEV for a MAC address works fine. But when upgrading from a qeth that previously queried DDEV on a z/VM NIC (ie. any kernel with commit ec61bd2fd2a2), the RDEV query now returns a _different_ MAC address than the DDEV query. If the NIC is configured with MACPROTECT, z/VM apparently requires us to use the MAC that was initially returned (on DDEV) and registered. So after upgrading to a kernel that uses RDEV, the SETVMAC registration cmd for the new MAC address fails and we end up with an inoperable interface.

To avoid regressions on upgrade, switch back to using DDEV for the MAC address query. The downgrade path (first RDEV, later DDEV) is fine; in this case both queries return the same MAC address.

Fixes: b7493e91c11a ("s390/qeth: use Read device to query hypervisor for MAC")
Reported-by: Michal Kubecek <mkubecek@suse.com>
Tested-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-06-19 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux | Linus Torvalds
Pull s390 updates from Martin Schwidefsky:

 "common I/O layer
   - Fix bit-fields crossing storage-unit boundaries in css_general_char

  dasd driver
   - Avoid a sparse warning in regard to the queue lock
   - Allocate the struct dasd_ccw_req as per request data. Only for internal I/O is the structure allocated separately
   - Remove the unused function dasd_kmalloc_set_cda
   - Save a few bytes in struct dasd_ccw_req by reordering fields
   - Convert remaining users of dasd_kmalloc_request to dasd_smalloc_request and remove the now unused function

  vfio/ccw
   - Refactor and improve pfn_array_alloc_pin/pfn_array_pin
   - Add a new tracepoint for failed vfio/ccw requests
   - Add a CCW translation improvement to accept more requests as valid
   - Bug fixes"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/dasd: only use preallocated requests
  s390/dasd: reshuffle struct dasd_ccw_req
  s390/dasd: remove dasd_kmalloc_set_cda
  s390/dasd: move dasd_ccw_req to per request data
  s390/dasd: simplify locking in process_final_queue
  s390/cio: sanitize css_general_characteristics definition
  vfio: ccw: add tracepoints for interesting error paths
  vfio: ccw: set ccw->cda to NULL defensively
  vfio: ccw: refactor and improve pfn_array_alloc_pin()
  vfio: ccw: shorten kernel doc description for pfn_array_pin()
  vfio: ccw: push down unsupported IDA check
  vfio: ccw: fix error return in vfio_ccw_sch_event
  s390/archrandom: Rework arch random implementation.
  s390/net: add pnetid support