path: root/drivers/gpu/drm/msm/msm_gpu.c
Age  Commit message  Author
2020-01-09  drm/msm: include linux/sched/task.h  (Arnd Bergmann)
commit 70082a52f96a45650dfc3d8cdcd2c42bdac9f6f0 upstream. Without this header file, compile-testing may run into a missing declaration: drivers/gpu/drm/msm/msm_gpu.c:444:4: error: implicit declaration of function 'put_task_struct' [-Werror,-Wimplicit-function-declaration] Fixes: 482f96324a4e ("drm/msm: Fix task dump in gpu recovery") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
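For context, linux/sched/task.h is the header that declares get_task_struct()/put_task_struct() after the sched header split, so the fix is essentially the include below (a minimal sketch, not the exact diff):

    #include <linux/sched/task.h>	/* declares get_task_struct()/put_task_struct() */

    /* without the include above, this call trips -Wimplicit-function-declaration */
    put_task_struct(task);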
2018-12-21  drm/msm: Fix task dump in gpu recovery  (Sharat Masetty)
[ Upstream commit 482f96324a4e08818db7d75bb12beaaea6c9561d ] The current recovery code gets a pointer to the task struct and does a few things all within the rcu_read_lock. This puts constraints on the types of gfp flags that can be used within the rcu lock. This patch instead gets a reference to the task within the rcu lock and releases the lock immediately, this way the task stays afloat until we need it and we also get to use the desired gfp flags. Signed-off-by: Sharat Masetty <smasetty@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
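A rough sketch of the pattern described (illustrative, not the literal upstream hunk; submit->pid follows the surrounding msm code, the rest is standard kernel API): pin the task inside the RCU section, drop the lock, then allocate with GFP_KERNEL and release the reference afterwards.

    struct task_struct *task;
    char *comm = NULL, *cmd = NULL;

    rcu_read_lock();
    task = pid_task(submit->pid, PIDTYPE_PID);
    if (task)
            get_task_struct(task);	/* pin the task so it outlives the RCU section */
    rcu_read_unlock();

    if (task) {
            /* no longer inside rcu_read_lock(), so GFP_KERNEL allocations are fine */
            comm = kstrdup(task->comm, GFP_KERNEL);
            cmd = kstrdup_quotable_cmdline(task, GFP_KERNEL);
            put_task_struct(task);
    }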
2018-11-21  drm/msm/gpu: fix parameters in function msm_gpu_crashstate_capture  (Anders Roxell)
[ Upstream commit 6969019f65b43afb6da6a26f1d9e55bbdfeebcd5 ] When CONFIG_DEV_COREDUMP isn't defined msm_gpu_crashstate_capture doesn't pass the correct parameters. drivers/gpu/drm/msm/msm_gpu.c: In function ‘recover_worker’: drivers/gpu/drm/msm/msm_gpu.c:479:34: error: passing argument 2 of ‘msm_gpu_crashstate_capture’ from incompatible pointer type [-Werror=incompatible-pointer-types] msm_gpu_crashstate_capture(gpu, submit, comm, cmd); ^~~~~~ drivers/gpu/drm/msm/msm_gpu.c:388:13: note: expected ‘char *’ but argument is of type ‘struct msm_gem_submit *’ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, char *comm, ^~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/gpu/drm/msm/msm_gpu.c:479:2: error: too many arguments to function ‘msm_gpu_crashstate_capture’ msm_gpu_crashstate_capture(gpu, submit, comm, cmd); ^~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/gpu/drm/msm/msm_gpu.c:388:13: note: declared here static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, char *comm, Update the !CONFIG_DEV_COREDUMP stub of msm_gpu_crashstate_capture so its parameter list matches the callers. Fixes: cdb95931dea3 ("drm/msm/gpu: Add the buffer objects from the submit to the crash dump") Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-10  drm/msm: Add A6XX device support  (Jordan Crouse)
Add support for the A6XX family of Adreno GPUs. The biggest addition is the GMU (Graphics Management Unit) which takes over most of the power management of the GPU itself but in an ironic twist of fate needs a goodly amount of management itself. Add support for the A6XX core code, the GMU and the HFI (hardware firmware interface) queue that the CPU uses to communicate with the GMU. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-08-10  drm/msm: Add a helper function to parse clock names  (Jordan Crouse)
Add a helper function to parse the clock names and set up the bulk data so we can take advantage of the bulk clock functions instead of rolling our own. This is added as a helper function so the upcoming a6xx GMU code can also take advantage of it. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
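A minimal sketch of what such a helper can look like with the clk_bulk API (the helper name and the msm_gpu fields are assumptions for illustration, not the exact upstream code):

    static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
    {
            struct device *dev = &pdev->dev;
            int i, count;

            count = of_property_count_strings(dev->of_node, "clock-names");
            if (count < 1)
                    return 0;

            gpu->grp_clks = devm_kcalloc(dev, count, sizeof(*gpu->grp_clks), GFP_KERNEL);
            if (!gpu->grp_clks)
                    return -ENOMEM;

            for (i = 0; i < count; i++)
                    of_property_read_string_index(dev->of_node, "clock-names", i,
                                                  &gpu->grp_clks[i].id);

            gpu->nr_clocks = count;

            /* fills in the struct clk pointer for every entry in one call */
            return devm_clk_bulk_get(dev, gpu->nr_clocks, gpu->grp_clks);
    }

Once the clk_bulk_data array is populated, the whole group can be switched with clk_bulk_prepare_enable()/clk_bulk_disable_unprepare() instead of open-coded loops.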
2018-07-30  drm/msm/gpu: avoid deprecated do_gettimeofday  (Arnd Bergmann)
All users of do_gettimeofday() have been removed, but this one recently crept in, along with an incorrect printing of the microseconds portion. This converts it to using ktime_get_real_timespec64() as a direct replacement, and adds the leading zeroes. I considered using monotonic times (ktime_get()) instead, but as this timestamp appears to only be used for humans rather than compared with other timestamps, the real time domain is probably good enough. Fixes: e43b045e2c82 ("drm/msm/gpu: Capture the state of the GPU") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rob Clark <robdclark@gmail.com>
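Roughly, the replacement looks like this (a sketch; the drm_printer and field names are assumed from context):

    struct timespec64 now;

    ktime_get_real_timespec64(&now);
    drm_printf(p, "time: %lld.%06ld\n",
               (long long)now.tv_sec, now.tv_nsec / NSEC_PER_USEC);

The fixed %06ld width is what keeps the leading zeroes in the microseconds part.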
2018-07-30  drm/msm/gpu: Add the buffer objects from the submit to the crash dump  (Jordan Crouse)
For hangs, copy out the contents of the buffer objects attached to the guilty submission and print them in the crash dump report. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-07-30  drm/msm/gpu: Capture the GPU state on a GPU hang  (Jordan Crouse)
Capture the GPU state on a GPU hang and store it for later playback via the devcoredump facility. Only one crash state is stored at a time on the assumption that the first hang is usually the most interesting. The existing crash state can be cleared after capturing it and then a new one will be captured on the next hang. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
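The devcoredump facility exposes such snapshots to userspace under /sys/class/devcoredump; the simplest entry point is dev_coredumpv(), roughly as below (a sketch only; the real driver builds its dump incrementally, and the buffer and device names here are assumptions):

    #include <linux/devcoredump.h>
    #include <linux/vmalloc.h>

    void *buf = vmalloc(len);
    if (!buf)
            return;

    /* ... serialize the captured GPU state into buf ... */

    /* devcoredump takes ownership of buf and frees it with vfree() when userspace is done */
    dev_coredumpv(gpu->dev->dev, buf, len, GFP_KERNEL);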
2018-07-30  drm/msm/gpu: Rearrange the code that collects the task during a hang  (Jordan Crouse)
Do a bit of cleanup to prepare for upcoming changes to pass the hanging task comm and cmdline to the crash dump function. v2: Use GFP_ATOMIC while holding the rcu lock per Chris Wilson Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-02-20  drm/msm/gpu: Set number of clocks to 0 if the list allocation fails  (Jordan Crouse)
If we fail to allocate gpu->grp_clks reset the number of available clocks to zero to avoid referencing the missing array later. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
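In other words, the failure path keeps the count consistent with the missing array, roughly (a sketch, not the exact hunk):

    gpu->grp_clks = devm_kcalloc(dev, gpu->nr_clocks,
                                 sizeof(*gpu->grp_clks), GFP_KERNEL);
    if (!gpu->grp_clks) {
            gpu->nr_clocks = 0;	/* nothing for later loops to dereference */
            return -ENOMEM;
    }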
2018-02-20  drm/msm: Replace gem_object deprecated functions  (Steve Kowalik)
drm_gem_object_{reference,unreference,unreference_unlocked} are deprecated functions, and merely alias to the get/put functions. Switch to the new names. Signed-off-by: Steve Kowalik <steven@wedontsleep.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
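The mapping is one-to-one; at the time of this change the new spellings were (sketch):

    drm_gem_object_get(obj);              /* was drm_gem_object_reference(obj) */
    drm_gem_object_put(obj);              /* was drm_gem_object_unreference(obj) */
    drm_gem_object_put_unlocked(obj);     /* was drm_gem_object_unreference_unlocked(obj) */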
2018-01-10  drm/msm: Add devfreq support for the GPU  (Jordan Crouse)
Add support for devfreq to dynamically control the GPU frequency. By default try to use the 'simple_ondemand' governor which can adjust the frequency based on GPU load. v2: Fix __aeabi_uldivmod issue from the 0 day bot and use devfreq_recommended_opp() as suggested by Rob. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
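For illustration, the usual shape of a devfreq hookup is below (a sketch under assumed names; the real driver ties the callbacks to GPU busy counters and its own clock handling):

    #include <linux/devfreq.h>
    #include <linux/pm_opp.h>

    static int msm_devfreq_target(struct device *dev, unsigned long *freq, u32 flags)
    {
            struct dev_pm_opp *opp;

            /* clamp the requested frequency to a valid OPP from the table */
            opp = devfreq_recommended_opp(dev, freq, flags);
            if (IS_ERR(opp))
                    return PTR_ERR(opp);

            *freq = dev_pm_opp_get_freq(opp);
            dev_pm_opp_put(opp);

            /* ... program the GPU core clock to *freq ... */
            return 0;
    }

    static struct devfreq_dev_profile msm_devfreq_profile = {
            .polling_ms = 10,
            .target     = msm_devfreq_target,
            /* .get_dev_status would report busy/total time so the governor can react to load */
    };

    /* during GPU init: */
    devm_devfreq_add_device(dev, &msm_devfreq_profile, "simple_ondemand", NULL);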
2018-01-10  drm/msm/gpu: Remove unused bus scaling code  (Jordan Crouse)
Remove the downstream bus scaling code. It is only needed for compatibility with a downstream or vendor kernel. Get it out of the way to clear space for devfreq support. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-12-13  drm/msm: gpu: Only sync fences on rings that exist  (Jordan Crouse)
The fault recovery code tries to sync fences on all possible rings instead of only the rings that actually exist, which faults the kernel when the number of rings is less than the maximum. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-12-13  drm/msm: free kstrdup'd cmdline  (Rob Clark)
Fixes: 18bb8a6 'drm/msm: show task cmdline in gpu recovery messages' Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-11-21  treewide: setup_timer() -> timer_setup()  (Kees Cook)
This converts all remaining cases of the old setup_timer() API into using timer_setup(), where the callback argument is the structure already holding the struct timer_list. These should have no behavioral changes, since they just change which pointer is passed into the callback with the same available pointers after conversion. It handles the following examples, in addition to some other variations. Casting from unsigned long: void my_callback(unsigned long data) { struct something *ptr = (struct something *)data; ... } ... setup_timer(&ptr->my_timer, my_callback, ptr); and forced object casts: void my_callback(struct something *ptr) { ... } ... setup_timer(&ptr->my_timer, my_callback, (unsigned long)ptr); become: void my_callback(struct timer_list *t) { struct something *ptr = from_timer(ptr, t, my_timer); ... } ... timer_setup(&ptr->my_timer, my_callback, 0); Direct function assignments: void my_callback(unsigned long data) { struct something *ptr = (struct something *)data; ... } ... ptr->my_timer.function = my_callback; have a temporary cast added, along with converting the args: void my_callback(struct timer_list *t) { struct something *ptr = from_timer(ptr, t, my_timer); ... } ... ptr->my_timer.function = (TIMER_FUNC_TYPE)my_callback; And finally, callbacks without a data assignment: void my_callback(unsigned long data) { ... } ... setup_timer(&ptr->my_timer, my_callback, 0); have their argument renamed to verify they're unused during conversion: void my_callback(struct timer_list *unused) { ... } ... timer_setup(&ptr->my_timer, my_callback, 0); The conversion is done with the following Coccinelle script: spatch --very-quiet --all-includes --include-headers \ -I ./arch/x86/include -I ./arch/x86/include/generated \ -I ./include -I ./arch/x86/include/uapi \ -I ./arch/x86/include/generated/uapi -I ./include/uapi \ -I ./include/generated/uapi --include ./include/linux/kconfig.h \ --dir . \ --cocci-file ~/src/data/timer_setup.cocci @fix_address_of@ expression e; @@ setup_timer( -&(e) +&e , ...) // Update any raw setup_timer() usages that have a NULL callback, but // would otherwise match change_timer_function_usage, since the latter // will update all function assignments done in the face of a NULL // function initialization in setup_timer(). 
@change_timer_function_usage_NULL@ expression _E; identifier _timer; type _cast_data; @@ ( -setup_timer(&_E->_timer, NULL, _E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E->_timer, NULL, (_cast_data)_E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E._timer, NULL, &_E); +timer_setup(&_E._timer, NULL, 0); | -setup_timer(&_E._timer, NULL, (_cast_data)&_E); +timer_setup(&_E._timer, NULL, 0); ) @change_timer_function_usage@ expression _E; identifier _timer; struct timer_list _stl; identifier _callback; type _cast_func, _cast_data; @@ ( -setup_timer(&_E->_timer, _callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | _E->_timer@_stl.function = _callback; | _E->_timer@_stl.function = &_callback; | _E->_timer@_stl.function = (_cast_func)_callback; | _E->_timer@_stl.function = (_cast_func)&_callback; | _E._timer@_stl.function = _callback; | _E._timer@_stl.function = &_callback; | _E._timer@_stl.function = (_cast_func)_callback; | _E._timer@_stl.function = (_cast_func)&_callback; ) // callback(unsigned long arg) @change_callback_handle_cast depends on change_timer_function_usage@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; identifier _handle; @@ void _callback( -_origtype _origarg +struct timer_list *t ) { ( ... when != _origarg _handletype *_handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... 
when != _origarg ) } // callback(unsigned long arg) without existing variable @change_callback_handle_cast_no_arg depends on change_timer_function_usage && !change_callback_handle_cast@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; @@ void _callback( -_origtype _origarg +struct timer_list *t ) { + _handletype *_origarg = from_timer(_origarg, t, _timer); + ... when != _origarg - (_handletype *)_origarg + _origarg ... when != _origarg } // Avoid already converted callbacks. @match_callback_converted depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier t; @@ void _callback(struct timer_list *t) { ... } // callback(struct something *handle) @change_callback_handle_arg depends on change_timer_function_usage && !match_callback_converted && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; @@ void _callback( -_handletype *_handle +struct timer_list *t ) { + _handletype *_handle = from_timer(_handle, t, _timer); ... } // If change_callback_handle_arg ran on an empty function, remove // the added handler. @unchange_callback_handle_arg depends on change_timer_function_usage && change_callback_handle_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; identifier t; @@ void _callback(struct timer_list *t) { - _handletype *_handle = from_timer(_handle, t, _timer); } // We only want to refactor the setup_timer() data argument if we've found // the matching callback. This undoes changes in change_timer_function_usage. @unchange_timer_function_usage depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg && !change_callback_handle_arg@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type change_timer_function_usage._cast_data; @@ ( -timer_setup(&_E->_timer, _callback, 0); +setup_timer(&_E->_timer, _callback, (_cast_data)_E); | -timer_setup(&_E._timer, _callback, 0); +setup_timer(&_E._timer, _callback, (_cast_data)&_E); ) // If we fixed a callback from a .function assignment, fix the // assignment cast now. @change_timer_function_assignment depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_func; typedef TIMER_FUNC_TYPE; @@ ( _E->_timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -&_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)_callback; +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)&_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -&_callback; +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -(_cast_func)_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -(_cast_func)&_callback +(TIMER_FUNC_TYPE)_callback ; ) // Sometimes timer functions are called directly. Replace matched args. 
@change_timer_function_calls depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression _E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_data; @@ _callback( ( -(_cast_data)_E +&_E->_timer | -(_cast_data)&_E +&_E._timer | -_E +&_E->_timer ) ) // If a timer has been configured without a data argument, it can be // converted without regard to the callback argument, since it is unused. @match_timer_function_unused_data@ expression _E; identifier _timer; identifier _callback; @@ ( -setup_timer(&_E->_timer, _callback, 0); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, 0L); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, 0UL); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0L); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0UL); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_timer, _callback, 0); +timer_setup(&_timer, _callback, 0); | -setup_timer(&_timer, _callback, 0L); +timer_setup(&_timer, _callback, 0); | -setup_timer(&_timer, _callback, 0UL); +timer_setup(&_timer, _callback, 0); | -setup_timer(_timer, _callback, 0); +timer_setup(_timer, _callback, 0); | -setup_timer(_timer, _callback, 0L); +timer_setup(_timer, _callback, 0); | -setup_timer(_timer, _callback, 0UL); +timer_setup(_timer, _callback, 0); ) @change_callback_unused_data depends on match_timer_function_unused_data@ identifier match_timer_function_unused_data._callback; type _origtype; identifier _origarg; @@ void _callback( -_origtype _origarg +struct timer_list *unused ) { ... when != _origarg } Signed-off-by: Kees Cook <keescook@chromium.org>
2017-11-01  drm/msm: use %z format modifier for printing size_t  (Arnd Bergmann)
The return type of ARRAY_SIZE() is size_t, so we have to use %zu instead of %lu to avoid this warning: drivers/gpu/drm/msm/msm_gpu.c: In function 'msm_gpu_init': drivers/gpu/drm/msm/msm_gpu.c:742:31: error: format '%lu' expects argument of type 'long unsigned int', but argument 7 has type 'unsigned int' [-Werror=format=] The warning is otherwise harmless as size_t is always the same size as unsigned long on all supported architectures, but gcc doesn't know that. Fixes: c2fceabca6d5 ("drm/msm: Support multiple ringbuffers") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm: dump submits which triggered gpu hang  (Rob Clark)
Note we need to move update_fences() to after msm_rd_dump_submit(), otherwise the bo's referenced by the submit may no longer be valid. Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm/rd: allow adding additional msg to top of dump  (Rob Clark)
For faults or hangs, it is nice to be able to include a bit more information. Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm: split rd debugfs file  (Rob Clark)
Split into two instances, the existing $debugfs/rd which continues to dump all submits, and $debugfs/hangrd which will be used to dump just submits that cause gpu hangs (and eventually faults, but that will require some iommu framework enhancements). Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm: show task cmdline in gpu recovery messages  (Rob Clark)
Now that the freedreno gallium driver defaults to using a submit_queue task (render reordering), just showing task->comm is not so useful (ie. it is always "flush_queue:0"), so also dump the cmdline. This should also be more useful for piglit/shader_runner. Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm: Implement preemption for A5XX targets  (Jordan Crouse)
Implement preemption for A5XX targets - this allows multiple ringbuffers for different priorities with automatic preemption of a lower priority ringbuffer if a higher one is ready. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm: Support multiple ringbuffers  (Jordan Crouse)
Add the infrastructure to support the idea of multiple ringbuffers. Assign each ringbuffer an id and use that as an index for the various ring specific operations. The biggest delta is to support legacy fences. Each fence gets its own sequence number but the legacy functions expect to use a unique integer. To handle this we return a unique identifier for each submission but map it to a specific ring/sequence under the covers. Newer users use a dma_fence pointer anyway so they don't care about the actual sequence ID or ring. The actual mechanics for multiple ringbuffers are very target specific so this code just allows for the possibility but still only defines one ringbuffer for each target family. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28  drm/msm: Move memptrs to msm_gpu  (Jordan Crouse)
When we move to multiple ringbuffers we're going to store the data in the memptrs on a per-ring basis. In order to prepare for that move the current memptrs from the adreno namespace into msm_gpu. This is way cleaner and immediately lets us kill off some sub functions so there is much less cost later when we do move to per-ring structs. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-08-22  drm/msm: Attach the GPU MMU when it is created  (Jordan Crouse)
Currently the GPU MMU is attached in the adreno_gpu code but as more and more of the GPU initialization moves to the generic GPU path we have a need to map and use GPU memory earlier and earlier. There isn't any reason to defer attaching the MMU until later so attach it right after the address space is created so it can be used immediately. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-06-17  drm/msm: Separate locking of buffer resources from struct_mutex  (Sushmita Susheelendra)
Buffer object specific resources like pages, domains, sg list need not be protected with struct_mutex. They can be protected with a buffer object level lock. This simplifies locking and makes it easier to avoid potential recursive locking scenarios for SVM involving mmap_sem and struct_mutex. This also removes unnecessary serialization when creating buffer objects, and also between buffer object creation and GPU command submission. Signed-off-by: Sushmita Susheelendra <ssusheel@codeaurora.org> [robclark: squash in handling new locking for shrinker] Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-06-16  drm/msm: remove address-space id  (Rob Clark)
Now that the msm_gem supports an arbitrary number of vma's, we no longer need to assign an id (index) to each address space. So rip out the associated code. Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-06-16  drm/msm: pass address-space to _get_iova() and friends  (Rob Clark)
No functional change, that will come later. But this will make it easier to deal with dynamically created address spaces (ie. per- process pagetables for gpu). Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-06-16  drm/msm: fix locking inconsistency for gpu->hw_init()  (Rob Clark)
Most, but not all, paths were calling this with struct_mutex held. The fast-path in msm_gem_get_iova() (plus some sub-code-paths that only run the first time) was masking this issue. So let's just always hold struct_mutex for hw_init(), and sprinkle some WARN_ON()'s and might_lock() to avoid this sort of problem in the future. Signed-off-by: Rob Clark <robdclark@gmail.com>
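The guards mentioned are the standard lock-assertion helpers, for example (a sketch; field names assumed from the msm code):

    /* in msm_gpu_hw_init(), catch callers that forgot to take struct_mutex: */
    WARN_ON(!mutex_is_locked(&gpu->dev->struct_mutex));

    /* and in paths that may take the lock later, tell lockdep up front: */
    might_lock(&dev->struct_mutex);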
2017-06-16  drm/msm: Add a struct to pass configuration to msm_gpu_init()  (Jordan Crouse)
The amount of information that we need to pass into msm_gpu_init() is steadily increasing, so add a new struct to stabilize the function call and make it easier to add new configuration down the line. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-05-27  drm/msm/gpu: check legacy clk names in get_clocks()  (Rob Clark)
Otherwise if someone was using old bindings with "core_clk" instead of "core" as the clock name, we'd never find it and the gpu would be stuck at 27MHz (or whatever its slowest rate is). Fixes: 98db803 ("msm/drm: gpu: Dynamically locate the clocks from the device tree") Signed-off-by: Rob Clark <robdclark@gmail.com>
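The fallback amounts to accepting both spellings of the core clock while scanning the clock-names list, roughly (names assumed, not the exact hunk):

    /* accept the new name and the legacy "_clk"-suffixed name from old bindings */
    if (!strcmp(name, "core") || !strcmp(name, "core_clk"))
            gpu->core_clk = clk;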
2017-04-08  msm/drm: gpu: Dynamically locate the clocks from the device tree  (Jordan Crouse)
Instead of using a fixed list of clock names use the clock-names list in the device tree to discover and get the list of clocks that we need. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-04-08  drm/msm: Hard code the GPU "slow frequency"  (Jordan Crouse)
Some A3XX and A4XX GPU targets required that the GPU clock be programmed to a non-zero value when it was disabled, so 27MHz was chosen as the "invalid" frequency. Even though newer targets do not have the same clock restrictions, we still write 27MHz on clock disable and expect the clock subsystem to round down to zero. For unknown reasons, even though the slow clock speed is always 27MHz and it isn't actually a functional level, the legacy device tree frequency tables always defined it and then did gymnastics to work around it. Instead of playing the same silly games, just hard code the "slow" clock speed in the code as 27MHz and save ourselves a bit of infrastructure. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
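So clock disable always goes through the same hard-coded rate, roughly (a sketch of the idea; field names assumed):

    /* request the "slow" rate and let the clock subsystem round it down to zero */
    clk_set_rate(gpu->core_clk, 27000000);
    clk_disable_unprepare(gpu->core_clk);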
2017-04-08  drm/msm: Make sure to detach the MMU during GPU cleanup  (Jordan Crouse)
We should be detaching the MMU before destroying the address space. To do this cleanly, the detach has to happen in adreno_gpu_cleanup() because it needs access to structs in adreno_gpu.c. Plus it is better symmetry to have the attach and detach at the same code level. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-04-08  drm/msm/gpu: use pm-runtime  (Rob Clark)
We need to use pm-runtime properly when the IOMMU is using device_link() to control its own clocks. Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-02-06  drm/msm: drop _clk suffix from clk names  (Rob Clark)
Suggested by Rob Herring. We still support the old names for compatibility with downstream android dt files. Cc: Rob Herring <robh@kernel.org> Signed-off-by: Rob Clark <robdclark@gmail.com> Reviewed-by: Eric Anholt <eric@anholt.net> Acked-by: Rob Herring <robh@kernel.org>
2016-11-28  drm/msm: gpu: Add A5XX target support  (Jordan Crouse)
Add support for the A5XX family of Adreno GPUs. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-11-28  drm/msm: Remove 'src_clk' from adreno configuration  (Jordan Crouse)
The adreno code inherited a silly workaround from downstream from the bad old days before decent clock control. grp_clk[0] (named 'src_clk') doesn't actually exist - it was used as a proxy for whatever the core clock actually was (usually 'core_clk'). All targets should be able to correctly request 'core_clk' and get the right thing back so zap the anachronism and directly use grp_clk[0] to control the clock rate. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-11-28  drm/msm: convert iova to 64b  (Rob Clark)
For a5xx the gpu is 64b so we need to change iova to 64b everywhere. On the display side, iova is still 32b so it can ignore the upper bits. (Although all the armv8 devices have an iommu that can map 64b pa to 32b iova.) Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-11-27  drm/msm: support multiple address spaces  (Rob Clark)
We can have various combinations of 64b and 32b address space, ie. 64b CPU but 32b display and gpu, or 64b CPU and GPU but 32b display. So best to decouple the device iova's from mmap offset. Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-10-25  dma-buf: Rename struct fence to dma_fence  (Chris Wilson)
I plan to usurp the short name of struct fence for a core kernel struct, and so I need to rename the specialised fence/timeline for DMA operations to make room. A consensus was reached in https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html that making clear this fence applies to DMA operations was a good thing. Since then the patch has grown a bit as usage increases, so hopefully it remains a good thing! (v2...: rebase, rerun spatch) v3: Compile on msm, spotted a manual fixup that I broke. v4: Try again for msm, sorry Daniel coccinelle script: @@ @@ - struct fence + struct dma_fence @@ @@ - struct fence_ops + struct dma_fence_ops @@ @@ - struct fence_cb + struct dma_fence_cb @@ @@ - struct fence_array + struct dma_fence_array @@ @@ - enum fence_flag_bits + enum dma_fence_flag_bits @@ @@ ( - fence_init + dma_fence_init | - fence_release + dma_fence_release | - fence_free + dma_fence_free | - fence_get + dma_fence_get | - fence_get_rcu + dma_fence_get_rcu | - fence_put + dma_fence_put | - fence_signal + dma_fence_signal | - fence_signal_locked + dma_fence_signal_locked | - fence_default_wait + dma_fence_default_wait | - fence_add_callback + dma_fence_add_callback | - fence_remove_callback + dma_fence_remove_callback | - fence_enable_sw_signaling + dma_fence_enable_sw_signaling | - fence_is_signaled_locked + dma_fence_is_signaled_locked | - fence_is_signaled + dma_fence_is_signaled | - fence_is_later + dma_fence_is_later | - fence_later + dma_fence_later | - fence_wait_timeout + dma_fence_wait_timeout | - fence_wait_any_timeout + dma_fence_wait_any_timeout | - fence_wait + dma_fence_wait | - fence_context_alloc + dma_fence_context_alloc | - fence_array_create + dma_fence_array_create | - to_fence_array + to_dma_fence_array | - fence_is_array + dma_fence_is_array | - trace_fence_emit + trace_dma_fence_emit | - FENCE_TRACE + DMA_FENCE_TRACE | - FENCE_WARN + DMA_FENCE_WARN | - FENCE_ERR + DMA_FENCE_ERR ) ( ... ) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk> Acked-by: Sumit Semwal <sumit.semwal@linaro.org> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk
2016-09-15  drm/msm: move fence allocation out of msm_gpu_submit()  (Rob Clark)
Prep work for next patch. Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm: print offender task name on hangcheck recovery  (Rob Clark)
Track the pid per submit, so we can print the name of the task which submitted the batch that caused the gpu to hang. Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm: fix leak in failed submit path  (Rob Clark)
Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm: drop return from gpu->submit()  (Rob Clark)
At this point, there is nothing left to fail. And submit already has a fence assigned and is added to the submit_list. Any problems from here on out are asynchronous (ie. hangcheck/recovery). Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm: 'struct fence' conversion  (Rob Clark)
Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm: introduce msm_fence_context  (Rob Clark)
Better encapsulate the per-timeline stuff into fence-context. For now there is just a single fence-context, but eventually we'll also have one per-CRTC to enable fully explicit fencing. Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm/gpu: simplify tracking in-flight bo's  (Rob Clark)
Since we already track the array of bo's in the submit object, just unconditionally take and drop ref's per submit (rather than only taking ref's if bo is not already active). This simplifies later patches. Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-05-08  drm/msm: move fence code to its own file  (Rob Clark)
Signed-off-by: Rob Clark <robdclark@gmail.com>
2015-10-22  drm/msm: Fix IOMMU clean up path in case msm_iommu_new() fails  (Stephane Viau)
msm_iommu_new() can fail and this change makes sure that we detect the failure and free the allocated domain before going any further. Signed-off-by: Stephane Viau <sviau@codeaurora.org> Signed-off-by: Rob Clark <robdclark@gmail.com>