Multithreading in memcached *was* originally simple:

- One listener thread
- N "event worker" threads
- Some misc background threads

Each worker thread is assigned connections, and runs its own epoll loop. The
central hash table, LRU lists, and some statistics counters are covered by
global locks. Protocol parsing and data transfer happen in the worker threads;
data lookups and modifications happen under the central locks.
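
As a rough sketch of that per-worker shape (illustrative only, not memcached's
actual code): each worker owns an epoll instance and services just the
connections handed to it.

    /* Hypothetical worker loop: protocol parsing and data transfer happen
     * here, inside the thread that owns the connection. */
    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    void worker_loop(int epoll_fd) {
        struct epoll_event events[MAX_EVENTS];
        for (;;) {
            int n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                /* events[i].data.ptr would point at a connection object;
                 * lookups and modifications still go through central locks. */
            }
        }
    }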

THIS HAS CHANGED!

- A secondary small hash table of locks is used to lock an item by its hash
  value. This prevents multiple threads from acting on the same item at the
  same time.
- This secondary hash table is mapped to the central hash table's buckets. This
  allows multiple threads to access the hash table in parallel. Only one
  thread may read or write against a particular hash table bucket.
- Atomic refcounts per item are used to manage garbage collection and
  mutability.
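
A minimal sketch of the two ideas above (illustrative names, not the real
source): a small table of mutexes indexed by the item's hash value, plus an
atomic per-item refcount.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>

    /* Power of two, far fewer locks than hash buckets; the mutexes are
     * initialized with pthread_mutex_init() at startup. */
    #define ITEM_LOCK_COUNT 1024
    static pthread_mutex_t item_locks[ITEM_LOCK_COUNT];

    typedef struct item {
        atomic_uint refcount;   /* guards garbage collection / mutability */
        /* ... key, data, flags ... */
    } item;

    /* Many hash buckets map onto one lock, so only one thread at a time can
     * read or write a given bucket (and therefore a given item). */
    void item_lock(uint32_t hv) {
        pthread_mutex_lock(&item_locks[hv & (ITEM_LOCK_COUNT - 1)]);
    }

    void item_unlock(uint32_t hv) {
        pthread_mutex_unlock(&item_locks[hv & (ITEM_LOCK_COUNT - 1)]);
    }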

- When pulling an item off of the LRU tail for eviction or re-allocation, the
  system must attempt to lock the item's bucket, which is done with a trylock
  to avoid deadlocks. If a bucket is in use (and not by that thread), the
  thread walks up the LRU a little in an attempt to fetch a non-busy item
  (see the sketch below).
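
A sketch of that tail walk (the structures and helpers here are assumptions,
not memcached's actual functions):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct item item;
    struct item { item *prev; /* ... */ };

    uint32_t hash_of(const item *it);   /* stand-in for the real hash */
    int item_trylock(uint32_t hv);      /* 0 on success, nonzero if busy */

    /* Walk up from the LRU tail, skipping items whose bucket lock is busy;
     * using a trylock means we never block while holding the LRU lock. */
    item *pull_tail_for_eviction(item *lru_tail) {
        int tries = 5;                  /* only walk up "a little" */
        for (item *it = lru_tail; it != NULL && tries-- > 0; it = it->prev) {
            if (item_trylock(hash_of(it)) == 0)
                return it;              /* caller evicts or re-allocates it */
            /* bucket held by another thread: move up the LRU and retry */
        }
        return NULL;
    }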

- Each LRU (and sub-LRUs in newer modes) has an independent lock.

- Raw accesses to the slab class are protected by a global slabs_lock. This
  is a short lock which covers pushing and popping free memory.
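
For example (illustrative only), the kind of critical section slabs_lock is
meant to cover is just a pointer swap on a slab class freelist:

    #include <pthread.h>
    #include <stddef.h>

    static pthread_mutex_t slabs_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Pop a chunk of free memory; the lock is held only for the swap, which
     * is what keeps this global lock short. */
    void *slabs_pop_free(void **freelist_head) {
        pthread_mutex_lock(&slabs_lock);
        void *chunk = *freelist_head;
        if (chunk != NULL)
            *freelist_head = *(void **)chunk;  /* next pointer lives in chunk */
        pthread_mutex_unlock(&slabs_lock);
        return chunk;
    }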

- item_lock must be held while modifying an item.
- slabs_lock must be held while modifying the ITEM_SLABBED flag bit within an item.
- ITEM_LINKED must not be set before an item has a key copied into it.
- Items without ITEM_SLABBED set cannot have their memory zeroed out.
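
A sketch of how those rules look in code (the flag values, fields, and helpers
below are assumptions layered on the earlier sketches):

    #include <pthread.h>
    #include <string.h>
    #include <stdint.h>

    #define ITEM_LINKED  1   /* example bit values */
    #define ITEM_SLABBED 4

    typedef struct {
        uint16_t it_flags;
        char     key[256];
        /* ... */
    } item;

    void item_lock(uint32_t hv);
    void item_unlock(uint32_t hv);
    extern pthread_mutex_t slabs_lock;

    void link_item(item *it, uint32_t hv, const char *key, size_t nkey) {
        item_lock(hv);                   /* modify items under item_lock   */
        memcpy(it->key, key, nkey);      /* the key is copied in first ... */
        it->it_flags |= ITEM_LINKED;     /* ... only then set ITEM_LINKED  */
        item_unlock(hv);
    }

    void flag_as_slabbed(item *it) {
        pthread_mutex_lock(&slabs_lock); /* ITEM_SLABBED changes only under
                                            slabs_lock                     */
        it->it_flags |= ITEM_SLABBED;
        pthread_mutex_unlock(&slabs_lock);
    }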

LOCK ORDERS:

(incomplete as of writing, sorry):

item_lock -> lru_lock -> slabs_lock

lru_lock -> item_trylock

Various stats_locks should never have other locks as dependencies.
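
In code the ordering rule means, for example, that a path which already holds
an item's lock may take the LRU lock and then slabs_lock, never the reverse
(names carried over from the sketches above):

    #include <pthread.h>
    #include <stdint.h>

    typedef struct item item;           /* opaque here */
    void item_lock(uint32_t hv);
    void item_unlock(uint32_t hv);
    extern pthread_mutex_t lru_lock;    /* single-LRU case for simplicity */
    extern pthread_mutex_t slabs_lock;

    /* Respects item_lock -> lru_lock -> slabs_lock.  Needing to go the
     * other way is why the LRU tail path uses item_trylock instead. */
    void unlink_and_free(item *it, uint32_t hv) {
        item_lock(hv);
        pthread_mutex_lock(&lru_lock);
        /* unlink `it` from the LRU list here */
        pthread_mutex_unlock(&lru_lock);

        pthread_mutex_lock(&slabs_lock);
        /* hand `it`'s memory back to the slab class freelist here */
        pthread_mutex_unlock(&slabs_lock);

        item_unlock(hv);
    }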

Various locks exist for background threads. They can be used to pause thread
execution or to update settings while the threads are idle. These background
threads may themselves take item or LRU locks.
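
One common shape for this (a sketch, not the actual background-thread code):
the thread takes its own lock around each slice of work, so whoever holds that
lock has effectively paused it and can change settings while it is idle.

    #include <pthread.h>
    #include <stdbool.h>
    #include <unistd.h>

    static pthread_mutex_t bg_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool bg_settings_dirty = false;

    void *background_thread(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&bg_lock);
            if (bg_settings_dirty) {
                /* re-read settings here */
                bg_settings_dirty = false;
            }
            /* one slice of background work; this may take item or LRU
             * locks, so those locks must never wait on bg_lock */
            pthread_mutex_unlock(&bg_lock);
            usleep(1000);               /* idle between slices */
        }
        return NULL;
    }

    void background_update_settings(void) {
        pthread_mutex_lock(&bg_lock);   /* pauses the thread between slices */
        bg_settings_dirty = true;
        pthread_mutex_unlock(&bg_lock);
    }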

A low priority issue:

- If you remove the per-thread stats lock, CPU usage goes down by less than a
  percentage point, and it does not improve scalability.
- In my testing, the remaining global STATS_LOCK calls never seem to collide.

Yes, more stats can be moved to threads, and those locks can actually be
removed entirely on x86-64 systems. However, my tests haven't shown that to be
beneficial so far, so I've prioritized other work.
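
As a rough illustration of moving a stat out to the threads (a sketch under
assumptions, not the real stats code): each worker only writes its own slot,
and a reader sums the slots on demand, so no lock is needed at all if aligned
64-bit accesses are atomic (as on x86-64).

    #include <stdint.h>

    #define MAX_WORKERS 64

    /* One slot per worker thread; only that worker ever writes it. */
    static uint64_t get_hits[MAX_WORKERS];

    void thread_record_hit(int thread_id) {
        get_hits[thread_id]++;
    }

    /* The stats reader sums the slots; the total may be a moment stale,
     * which is fine for statistics. */
    uint64_t total_get_hits(int nthreads) {
        uint64_t sum = 0;
        for (int i = 0; i < nthreads; i++)
            sum += get_hits[i];
        return sum;
    }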
