.. _unevictable_lru:

==============================
Unevictable LRU Infrastructure
==============================

.. contents:: :local:


Introduction
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.

The document attempts to provide the overall rationale behind this mechanism
and the rationale for some of the design decisions that drove the
implementation. The latter design rationale is discussed in the context of an
implementation description. Admittedly, one can obtain the implementation
details - the "what does it do?" - by reading the code. One hopes that the
descriptions below add value by providing the answer to "why does it do that?".


The Unevictable LRU
===================

The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan. This mechanism is based on a patch
by Larry Woodman of Red Hat to address several scalability problems with page
reclaim in Linux. The problems have been observed at customer sites on large
memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single node. When a large
fraction of these pages are not evictable for any reason [see below], vmscan
will spend a lot of time scanning the LRU lists looking for the small fraction
of pages that are evictable. This can result in a situation where all CPUs are
spending 100% of their time in vmscan for hours or days on end, with the system
completely unresponsive.

The unevictable list addresses the following classes of unevictable pages:

 * Those owned by ramfs.

 * Those mapped into SHM_LOCK'd shared memory regions.

 * Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.


The Unevictable LRU Page List
-----------------------------

The Unevictable LRU page list is a lie. It was never an LRU-ordered list, but a
companion to the LRU-ordered anonymous and file, active and inactive page
lists; and now it is not even a page list. But following familiar convention,
here in this document and in the source, we often imagine it as a fifth LRU
page list.

The Unevictable LRU infrastructure consists of an additional, per-node, LRU
list called the "unevictable" list and an associated page flag,
PG_unevictable, to indicate that the page is being managed on the unevictable
list.

The PG_unevictable flag is analogous to, and mutually exclusive with, the
PG_active flag in that it indicates on which LRU list a page resides when
PG_lru is set.

The Unevictable LRU infrastructure maintains unevictable pages as if they were
on an additional LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug. The Linux kernel
     can only migrate pages that it can successfully isolate from the LRU
     lists (or "Movable" pages: outside of consideration here). If we were to
     maintain pages elsewhere than on an LRU-like list, where they can be
     detected by isolate_lru_page(), we would prevent their migration.

The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages. This differentiation is only important while the pages are,
in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-node LRU
lists and statistics originally proposed and posted by Christoph Lameter.
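That "arrayification" means the unevictable list is just one more slot in the
per-node (and, as described below, per-memcg) array of LRU lists. A simplified
sketch of the index type; the kernel's actual enum in include/linux/mmzone.h
composes these values from LRU_BASE/LRU_ACTIVE/LRU_FILE offsets, but the
resulting order is the same::

    enum lru_list {
        LRU_INACTIVE_ANON,      /* evictable, swap-backed */
        LRU_ACTIVE_ANON,
        LRU_INACTIVE_FILE,      /* evictable, file-backed */
        LRU_ACTIVE_FILE,
        LRU_UNEVICTABLE,        /* the "fifth list" described above */
        NR_LRU_LISTS
    };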
Memory Control Group Interaction
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by
extending the lru_list enum.

The memory controller data structure automatically gets a per-node unevictable
list as a result of the "arrayification" of the per-node LRU lists (one per
lru_list enum element). The memory controller tracks the movement of pages to
and from the unevictable list.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list. This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have
     a chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory. This can cause
     the control group to thrash or to OOM-kill tasks.


.. _mark_addr_space_unevict:

Marking Address Spaces Unevictable
----------------------------------

For facilities such as ramfs, none of the pages attached to the address space
may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions:

 * ``void mapping_set_unevictable(struct address_space *mapping);``

       Mark the address space as being completely unevictable.

 * ``void mapping_clear_unevictable(struct address_space *mapping);``

       Mark the address space as being evictable.

 * ``int mapping_unevictable(struct address_space *mapping);``

       Query the address space, and return true if it is completely
       unevictable.

These are currently used in three places in the kernel:

 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.

 (2) By SYSV SHM to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.
     Note that SHM_LOCK is not required to page in the locked pages if they're
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory.

 (3) By the i915 driver to mark pinned address space until it's unpinned. The
     amount of unevictable memory marked by the i915 driver is roughly the
     bounded object size in debugfs/dri/0/i915_gem_objects.
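As an illustration of use (1), ramfs marks each inode's mapping once, at
creation time. A minimal sketch of that pattern, loosely modeled on
fs/ramfs/inode.c; the function name here is hypothetical and most of the
inode setup is elided::

    struct inode *example_get_inode(struct super_block *sb, umode_t mode)
    {
        struct inode *inode = new_inode(sb);

        if (inode) {
            inode->i_mode = mode;
            /*
             * Page cache pages of this mapping are unevictable for
             * the life of the inode: vmscan diverts any such page it
             * encounters to the unevictable list instead of trying
             * to reclaim it.
             */
            mapping_set_unevictable(inode->i_mapping);
        }
        return inode;
    }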
Detecting Unevictable Pages
---------------------------

The function page_evictable() in mm/internal.h determines whether a page is
evictable or not using the query function outlined above [see section
:ref:`Marking address spaces unevictable <mark_addr_space_unevict>`]
to check the AS_UNEVICTABLE flag.

For address spaces that are so marked after being populated (as SHM regions
might be), the lock action (e.g. SHM_LOCK) can be lazy, and need not populate
the page tables for the region as does, for example, mlock(), nor need it make
any special effort to push any pages in the SHM_LOCK'd area to the unevictable
list. Instead, vmscan will do this if and when it encounters the pages during
a reclamation scan.

On an unlock action (such as SHM_UNLOCK), the unlocker (e.g. shmctl()) must
scan the pages in the region and "rescue" them from the unevictable list if no
other condition is keeping them unevictable. If an unevictable region is
destroyed, the pages are also "rescued" from the unevictable list in the
process of freeing them.

page_evictable() also checks for mlocked pages by testing an additional page
flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
faulted into a VM_LOCKED VMA, or found in a VMA in the process of being
VM_LOCKED.
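The check itself is short. A sketch close to the 5.18-era mm/internal.h
version; the rcu_read_lock() is there to keep the address_space of an inode,
or of the swap cache, from being freed while it is inspected::

    static inline bool page_evictable(struct page *page)
    {
        bool ret;

        /* Prevent address_space of inode and swap cache from being freed */
        rcu_read_lock();
        ret = !mapping_unevictable(page_mapping(page)) &&
              !PageMlocked(page);
        rcu_read_unlock();
        return ret;
    }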
Vmscan's Handling of Unevictable Pages
--------------------------------------

If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again (via munlock() for example) and have been
"rescued" from the unevictable list. However, there may be situations where we
decide, for the sake of expediency, to leave an unevictable page on one of the
regular active/inactive LRU lists for vmscan to deal with. vmscan checks for
such pages in all of the shrink_{active|inactive|page}_list() functions and
will "cull" any such pages that it encounters: that is, it diverts those pages
to the unevictable list for the memory cgroup and node being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked. Such pages will make it all the way to
shrink_active_list() or shrink_page_list() where they will be detected when
vmscan walks the reverse map in page_referenced() or try_to_unmap(). The page
is culled to the unevictable list when it is released by the shrinker.

To "cull" an unevictable page, vmscan simply puts the page back on the LRU
list using putback_lru_page() - the inverse operation to isolate_lru_page() -
after dropping the page lock. Because the condition which makes the page
unevictable may change once the page is unlocked, __pagevec_lru_add_fn() will
recheck the unevictable state of a page before placing it on the unevictable
list.


MLOCKED Pages
=============

The unevictable page list is also useful for mlock(), in addition to ramfs and
SYSV SHM. Note that mlock() is only available in CONFIG_MMU=y situations; in
NOMMU situations, all mappings are effectively mlocked.


History
-------

The "Unevictable mlocked Pages" infrastructure is based on work originally
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
Nick posted his patch as an alternative to a patch posted by Christoph Lameter
to achieve the same objective: hiding mlocked pages from vmscan.

In Nick's patch, he used one of the struct page LRU list link fields as a
count of VM_LOCKED VMAs that map the page (Rik van Riel had the same idea
three years earlier). But this use of the link field for a count prevented the
management of the pages on an LRU list, and thus mlocked pages were not
migratable as isolate_lru_page() could not detect them, and the LRU list link
field was not available to the migration subsystem.

Nick resolved this by putting mlocked pages back on the LRU list before
attempting to isolate them, thus abandoning the count of VM_LOCKED VMAs. When
Nick's patch was integrated with the Unevictable LRU work, the count was
replaced by walking the reverse map when munlocking, to determine whether any
other VM_LOCKED VMAs still mapped the page.

However, walking the reverse map for each page when munlocking was ugly and
inefficient, and could lead to catastrophic contention on a file's rmap lock,
when many processes which had it mlocked were trying to exit. In 5.18, the
idea of keeping mlock_count in the Unevictable LRU list link field was revived
and put to work, without preventing the migration of mlocked pages. This is
why the "Unevictable LRU list" cannot be a linked list of pages now; but there
was no use for that linked list anyway - though its size is maintained for
meminfo.


Basic Management
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages. When such a page has been "noticed" by the memory management subsystem,
the page is marked with the PG_mlocked flag. This can be manipulated using the
PageMlocked() functions.

A PG_mlocked page will be placed on the unevictable list when it is added to
the LRU. Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlock2()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the
     MCL_FUTURE flag;

 (4) in the fault path and when a VM_LOCKED stack segment is expanded; or

 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
     reclaim a page in a VM_LOCKED VMA by page_referenced() or try_to_unmap().

mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped
     file; or

 (4) before a page is COW'd in a VM_LOCKED VMA.


mlock()/mlock2()/mlockall() System Call Handling
------------------------------------------------

mlock(), mlock2() and mlockall() system call handlers proceed to mlock_fixup()
for each VMA in the range specified by the call. In the case of mlockall(),
this is the entire active address space of the task. Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory. A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op and mlock_fixup() simply returns.

If the VMA passes some filtering as described in "Filtering Special VMAs"
below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
off a subset of the VMA if the range does not cover the entire VMA. Any pages
already present in the VMA are then marked as mlocked by mlock_page() via
mlock_pte_range() via walk_page_range() via mlock_vma_pages_range().

Before returning from the system call, do_mlock() or mlockall() will call
__mm_populate() to fault in the remaining pages via get_user_pages() and to
mark those pages as mlocked as they are faulted.

Note that the VMA being mlocked might be mapped with PROT_NONE. In this case,
get_user_pages() will be unable to fault in the pages. That's okay. If pages
do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.

For each PTE (or PMD) being faulted into a VMA, the page add rmap function
calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
(unless it is a PTE mapping of a part of a transparent huge page). Or when
it is a newly allocated anonymous page,
lru_cache_add_inactive_or_unevictable() calls mlock_new_page() instead:
similar to mlock_page(), but can make better judgments, since this page is
held exclusively and known not to be on LRU yet.

mlock_page() sets PageMlocked immediately, then places the page on the CPU's
mlock pagevec, to batch up the rest of the work to be done under lru_lock by
__mlock_page(). __mlock_page() sets PageUnevictable, initializes mlock_count
and moves the page to unevictable state ("the unevictable LRU", but with
mlock_count in place of LRU threading). Or if the page was already PageLRU
and PageUnevictable and PageMlocked, it simply increments the mlock_count.

But in practice that may not work ideally: the page may not yet be on an LRU,
or it may have been temporarily isolated from LRU. In such cases the
mlock_count field cannot be touched, but will be set to 0 later when
__pagevec_lru_add_fn() returns the page to "LRU". Races prohibit mlock_count
from being set to 1 then: rather than risk stranding a page indefinitely as
unevictable, always err with mlock_count on the low side, so that when
munlocked the page will be rescued to an evictable LRU, then perhaps be
mlocked again later if vmscan finds it in a VM_LOCKED VMA.
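The populate-now versus populate-on-fault distinction described above is
visible from userspace. A minimal sketch (error handling abbreviated; the
mlock2() wrapper requires _GNU_SOURCE and a reasonably recent glibc)::

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int lock_region(void *buf, size_t len)
    {
        /*
         * mlock() populates the whole range up front via __mm_populate(),
         * marking each page PG_mlocked as it is faulted in.
         */
        if (mlock(buf, len))
            return -1;
        if (munlock(buf, len))
            return -1;

        /*
         * mlock2(MLOCK_ONFAULT) sets VM_LOCKED without faulting anything
         * in: each page is mlocked in the fault path when first touched.
         */
        return mlock2(buf, len, MLOCK_ONFAULT);
    }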
Filtering Special VMAs
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely. The pages behind
   these mappings are inherently pinned, so we don't need to mark them as
   mlocked. In any case, most of the pages have no struct page in which to so
   mark the page. Because of this, get_user_pages() will fail for these VMAs,
   so there is no sense in attempting to visit them.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory. We
   neither need nor want to mlock() these pages. But __mm_populate() includes
   hugetlbfs ranges, allocating the huge pages and populating the PTEs.

3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
   such as the VDSO page, relay channel pages, etc. These pages are inherently
   unevictable and are not managed on the LRU lists. __mm_populate() includes
   these ranges, populating the PTEs if not already populated.

4) VMAs with VM_MIXEDMAP set are not marked VM_LOCKED, but __mm_populate()
   includes these ranges, populating the PTEs if not already populated.

Note that for all of these special VMAs, mlock_fixup() does not set the
VM_LOCKED flag. Therefore, we won't have to deal with them later during
munlock(), munmap() or task exit. Neither does mlock_fixup() account these
VMAs against the task's "locked_vm".
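All of these classes are covered by one compact gate near the top of
mlock_fixup(). A simplified sketch; VM_SPECIAL is the mask
VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP, and recent kernels exempt a
few further cases (such as DAX and secretmem mappings) omitted here::

    if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
        is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm))
        /* Don't set VM_LOCKED and don't count against locked_vm. */
        goto out;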
munlock()/munlockall() System Call Handling
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same
mlock_fixup() function as the mlock(), mlock2() and mlockall() system calls
are. If called to munlock an already munlocked VMA, mlock_fixup() simply
returns. Because of the VMA filtering discussed above, VM_LOCKED will not be
set in any "special" VMAs. So, those VMAs will be ignored for munlock.

If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off
the specified range. All pages in the VMA are then munlocked by munlock_page()
via mlock_pte_range() via walk_page_range() via mlock_vma_pages_range() - the
same function used when mlocking a VMA range, with new flags for the VMA
indicating that it is munlock() being performed.

munlock_page() uses the mlock pagevec to batch up work to be done under
lru_lock by __munlock_page(). __munlock_page() decrements the page's
mlock_count, and when that reaches 0 it clears PageMlocked and clears
PageUnevictable, moving the page from unevictable state to inactive LRU.

But in practice that may not work ideally: the page may not yet have reached
"the unevictable LRU", or it may have been temporarily isolated from it. In
those cases its mlock_count field is unusable and must be assumed to be 0: so
that the page will be rescued to an evictable LRU, then perhaps be mlocked
again later if vmscan finds it in a VM_LOCKED VMA.


Migrating MLOCKED Pages
-----------------------

A page that is being migrated has been isolated from the LRU lists and is held
locked across unmapping of the page, updating the page's address space entry
and copying the contents and state, until the page table entry has been
replaced with an entry that refers to the new page. Linux supports migration
of mlocked pages and other unevictable pages. PG_mlocked is cleared from the
old page when it is unmapped from the last VM_LOCKED VMA, and set when the
new page is mapped in place of the migration entry in a VM_LOCKED VMA. If the
page was unevictable because it was mlocked, PG_unevictable follows
PG_mlocked; but if the page was unevictable for other reasons, PG_unevictable
is copied explicitly.

Note that page migration can race with mlocking or munlocking of the same
page. There is mostly no problem since page migration requires unmapping all
PTEs of the old page (including munlock where VM_LOCKED), then mapping in the
new page (including mlock where VM_LOCKED). The page table locks provide
sufficient synchronization.

However, since mlock_vma_pages_range() starts by setting VM_LOCKED on a VMA,
before mlocking any pages already present, if one of those pages were migrated
before mlock_pte_range() reached it, it would get counted twice in
mlock_count. To prevent that, mlock_vma_pages_range() temporarily marks the
VMA as VM_IO, so that mlock_vma_page() will skip it.

To complete page migration, we place the old and new pages back onto the LRU
afterwards. The "unneeded" page - old page on success, new page on failure -
is freed when the reference count held by the migration process is released.
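A sketch of the flag handling described above (illustrative only, not the
literal mm/migrate.c code; ``old`` and ``new`` stand for the two struct page
pointers)::

    /*
     * PG_mlocked is never copied: it was cleared when the old page was
     * unmapped from its last VM_LOCKED VMA, and will be set afresh when
     * the new page is mapped back into a VM_LOCKED VMA. Unevictability
     * for any other reason must be carried over by hand:
     */
    if (TestClearPageUnevictable(old))
        SetPageUnevictable(new);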
Compacting MLOCKED Pages
------------------------

The memory map can be scanned for compactable regions and the default behavior
is to let unevictable pages be moved. /proc/sys/vm/compact_unevictable_allowed
controls this behavior (see Documentation/admin-guide/sysctl/vm.rst). The work
of compaction is mostly handled by the page migration code and the same work
flow as described in Migrating MLOCKED Pages will apply.


MLOCKING Transparent Huge Pages
-------------------------------

A transparent huge page is represented by a single entry on an LRU list.
Therefore, we can only make unevictable an entire compound page, not
individual subpages.

If a user tries to mlock() part of a huge page, and no user mlock()s the
whole of the huge page, we want the rest of the page to be reclaimable.

We cannot just split the page on partial mlock() as split_huge_page() can
fail and a new intermittent failure mode for the syscall is undesirable.

We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
the PMD on the border of a VM_LOCKED VMA will be split into a PTE table.

This way the huge page is accessible for vmscan. Under memory pressure the
page will be split, subpages which belong to VM_LOCKED VMAs will be moved
to the unevictable LRU and the rest can be reclaimed.

/proc/meminfo's Unevictable and Mlocked amounts do not include those parts
of a transparent huge page which are mapped only by PTEs in VM_LOCKED VMAs.


mmap(MAP_LOCKED) System Call Handling
-------------------------------------

In addition to the mlock(), mlock2() and mlockall() system calls, an
application can request that a region of memory be mlocked by supplying the
MAP_LOCKED flag to the mmap() call. There is one important and subtle
difference here, though. mmap() + mlock() will fail with ENOMEM if the range
cannot be faulted in (e.g. because mm_populate() fails), while
mmap(MAP_LOCKED) will not fail. The mmapped area will still have properties of
the locked area - pages will not get swapped out - but major page faults to
fault memory in might still happen.

Furthermore, any mmap() call or brk() call that expands the heap by a task
that has previously called mlockall() with the MCL_FUTURE flag will result
in the newly mapped memory being mlocked. Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages
and populate the page table.

To mlock a range of memory under the unevictable/mlock infrastructure,
the mmap() handler and task address space expansion functions call
populate_vma_page_range() specifying the VMA and the address range to mlock.
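The difference is easy to see in userspace code. A minimal sketch (anonymous
mappings, error handling abbreviated)::

    #include <sys/mman.h>

    /* Strict: fails if the pages cannot all be locked and faulted in. */
    void *strict_locked(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return NULL;
        if (mlock(p, len)) {    /* population failure reported here */
            munmap(p, len);
            return NULL;
        }
        return p;
    }

    /*
     * Best effort: VM_LOCKED is set, but a population failure is not
     * reported, so major faults may still occur on first touch.
     */
    void *lazy_locked(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

        return p == MAP_FAILED ? NULL : p;
    }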
munmap()/exit()/exec() System Call Handling
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the
pages. Before the unevictable/mlock changes, mlocking did not mark the pages
in any way, so unmapping them required no processing.

For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
(unless it was a PTE mapping of a part of a transparent huge page).

munlock_page() uses the mlock pagevec to batch up work to be done under
lru_lock by __munlock_page(). __munlock_page() decrements the page's
mlock_count, and when that reaches 0 it clears PageMlocked and clears
PageUnevictable, moving the page from unevictable state to inactive LRU.

But in practice that may not work ideally: the page may not yet have reached
"the unevictable LRU", or it may have been temporarily isolated from it. In
those cases its mlock_count field is unusable and must be assumed to be 0: so
that the page will be rescued to an evictable LRU, then perhaps be mlocked
again later if vmscan finds it in a VM_LOCKED VMA.


Truncating MLOCKED Pages
------------------------

File truncation or hole punching forcibly unmaps the deleted pages from
userspace; truncation even unmaps and deletes any private anonymous pages
which had been Copied-On-Write from the file pages now being truncated.

Mlocked pages can be munlocked and deleted in this way: as with munmap(),
for each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
(unless it was a PTE mapping of a part of a transparent huge page).

However, there can be a racing munlock(). Since mlock_vma_pages_range() starts
munlocking by clearing VM_LOCKED from a VMA, before munlocking all the pages
present, if one of those pages were unmapped by truncation or hole punch
before mlock_pte_range() reached it, it would not be recognized as mlocked by
this VMA, and would not be counted out of mlock_count. In this rare case, a
page may still appear as PageMlocked after it has been fully unmapped: and it
is left to release_pages() (or __page_cache_release()) to clear it and update
statistics before freeing (this event is counted in /proc/vmstat
unevictable_pgs_cleared, which is usually 0).


Page Reclaim in shrink_*_list()
-------------------------------

vmscan's shrink_active_list() culls any obviously unevictable pages -
i.e. !page_evictable(page) pages - diverting those to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto
the active/inactive LRU lists. Note that these pages do not have
PageUnevictable set - otherwise they would be on the unevictable list and
shrink_active_list() would never see them.

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages. shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region; that happens
     only when an application accesses the page, perhaps for the first time
     after SHM_LOCK'ing the segment. (See the sketch after this list.)

 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
     but events left mlock_count too low, so they were munlocked too early.
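Example (2) can be reproduced from userspace with nothing more than a plain
shmctl() call. A minimal sketch (error handling omitted; requires
CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK)::

    #include <sys/ipc.h>
    #include <sys/shm.h>

    void lock_segment_lazily(int shmid)
    {
        /*
         * Marks the segment's address space AS_UNEVICTABLE, but neither
         * allocates nor faults in any pages. Pages already resident stay
         * on the regular LRU lists until vmscan encounters them and culls
         * them to the unevictable list.
         */
        shmctl(shmid, SHM_LOCK, NULL);
    }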
vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
unevictable pages found on the inactive lists to the appropriate memory cgroup
and node unevictable list.

rmap's page_referenced_one(), called via vmscan's shrink_active_list() or
shrink_page_list(), and rmap's try_to_unmap_one() called via
shrink_page_list(), check for (3) pages still mapped into VM_LOCKED VMAs, and
call mlock_vma_page() to correct them. Such pages are culled to the
unevictable list when released by the shrinker.