| author | Jason Gunthorpe <jgg@mellanox.com> | 2019-08-21 14:12:29 -0300 |
|---|---|---|
| committer | Jason Gunthorpe <jgg@mellanox.com> | 2019-08-21 20:58:18 -0300 |
| commit | daa138a58c802e7b4c2fb73f9b85bb082616ef43 (patch) | |
| tree | be913e8e3745bb367d2ba371598f447649102cfc /include/linux/wait.h | |
| parent | 6869b7b206595ae0e326f59719090351eb8f4f5d (diff) | |
| parent | fba0e448a2c5b297a4ddc1ec4e48f4aa6600a1c9 (diff) | |
Merge branch 'odp_fixes' into hmm.git
From rdma.git
Jason Gunthorpe says:
====================
This is a collection of general cleanups for ODP to clarify some of the
flows around umem creation and use of the interval tree.
====================
The branch is based on v5.3-rc5 because of its dependencies, and is being taken
into hmm.git because the next patches depend on it.
* odp_fixes:
RDMA/mlx5: Use odp instead of mr->umem in pagefault_mr
RDMA/mlx5: Use ib_umem_start instead of umem.address
RDMA/core: Make invalidate_range a device operation
RDMA/odp: Use kvcalloc for the dma_list and page_list
RDMA/odp: Check for overflow when computing the umem_odp end
RDMA/odp: Provide ib_umem_odp_release() to undo the allocs
RDMA/odp: Split creating a umem_odp from ib_umem_get
RDMA/odp: Make the three ways to create a umem_odp clear
RMDA/odp: Consolidate umem_odp initialization
RDMA/odp: Make it clearer when a umem is an implicit ODP umem
RDMA/odp: Iterate over the whole rbtree directly
RDMA/odp: Use the common interval tree library instead of generic
RDMA/mlx5: Fix MR npages calculation for IB_ACCESS_HUGETLB
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Diffstat (limited to 'include/linux/wait.h')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | include/linux/wait.h | 13 |

1 file changed, 13 insertions(+), 0 deletions(-)
```diff
diff --git a/include/linux/wait.h b/include/linux/wait.h
index b6f77cf60dd7..30c515520fb2 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -127,6 +127,19 @@ static inline int waitqueue_active(struct wait_queue_head *wq_head)
 }
 
 /**
+ * wq_has_single_sleeper - check if there is only one sleeper
+ * @wq_head: wait queue head
+ *
+ * Returns true of wq_head has only one sleeper on the list.
+ *
+ * Please refer to the comment for waitqueue_active.
+ */
+static inline bool wq_has_single_sleeper(struct wait_queue_head *wq_head)
+{
+	return list_is_singular(&wq_head->head);
+}
+
+/**
  * wq_has_sleeper - check if there are any waiting processes
  * @wq_head: wait queue head
  *
```
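For context, the new helper sits alongside the existing lock-free checks waitqueue_active() and wq_has_sleeper(): it lets code that is already queued on a wait queue detect that it is the only sleeper. The sketch below is a minimal, hypothetical illustration of how such a helper might be combined with the usual prepare_to_wait_exclusive()/finish_wait() pattern; the token_pool structure and the token_*() functions are invented for this example and are not kernel APIs, and the memory-ordering caveats described in the comment above waitqueue_active() apply here as well.

```c
#include <linux/atomic.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Hypothetical token pool: a bounded counter plus a wait queue. */
struct token_pool {
	atomic_t tokens;
	struct wait_queue_head wait;
};

static void token_pool_init(struct token_pool *tp, int n)
{
	atomic_set(&tp->tokens, n);
	init_waitqueue_head(&tp->wait);
}

/* Lock-free attempt: take one token if any are left. */
static bool token_try_get(struct token_pool *tp)
{
	return atomic_add_unless(&tp->tokens, -1, 0);
}

/* Sleep until a token can be taken. */
static void token_get(struct token_pool *tp)
{
	DEFINE_WAIT(wq_entry);

	if (token_try_get(tp))
		return;

	do {
		prepare_to_wait_exclusive(&tp->wait, &wq_entry,
					  TASK_UNINTERRUPTIBLE);
		/*
		 * We are queued now.  If we are the only sleeper, nobody
		 * is ahead of us, so try to grab a token directly instead
		 * of waiting for the releaser's wake-up.
		 */
		if (wq_has_single_sleeper(&tp->wait) && token_try_get(tp))
			break;
		schedule();
	} while (!token_try_get(tp));

	finish_wait(&tp->wait, &wq_entry);
}

/* Return a token and wake a waiter only if one exists. */
static void token_put(struct token_pool *tp)
{
	atomic_inc(&tp->tokens);
	/*
	 * wq_has_sleeper() issues the full barrier that pairs with the
	 * waiter's set_current_state(), so a concurrent sleeper is not
	 * missed; waitqueue_active() alone would not be safe here.
	 */
	if (wq_has_sleeper(&tp->wait))
		wake_up(&tp->wait);
}
```

Note that wq_has_single_sleeper() is just list_is_singular() on wq_head->head: it adds no barrier of its own, so like waitqueue_active() it is only meaningful when the caller already provides the required ordering (here, via prepare_to_wait_exclusive()).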
