NTDB: Redesigning The Trivial DataBase

Rusty Russell, IBM Corporation

19 June 2012

Abstract

The Trivial DataBase on-disk format is 32 bits; with usage cases
heading towards the 4G limit, that must change. This required
breakage provides an opportunity to revisit TDB's other design
decisions and reassess them.

1 Introduction

The Trivial DataBase was originally written by Andrew Tridgell as
a simple key/data pair storage system with the same API as dbm,
but allowing multiple readers and writers while being small
enough (< 1000 lines of C) to include in SAMBA. The simple design
created in 1999 has proven surprisingly robust and performant,
used in Samba versions 3 and 4 as well as numerous other
projects. Its useful life was greatly increased by the
(backwards-compatible!) addition of transaction support in 2005.

The wider variety and greater demands of TDB-using code have led
to some organic growth of the API, as well as some compromises in
the implementation. None of these, by themselves, are seen as
show-stoppers, but the cumulative effect is a loss of elegance
over the initial, simple TDB implementation. Here is a table of
the approximate number of lines of implementation code and number
of API functions at the end of each year:
+----------+---------------+--------------------------------+
| Year End | API Functions | Lines of C Code Implementation |
+----------+---------------+--------------------------------+
| 1999     | 13            | 1195                           |
| 2000     | 24            | 1725                           |
| 2001     | 32            | 2228                           |
| 2002     | 35            | 2481                           |
| 2003     | 35            | 2552                           |
| 2004     | 40            | 2584                           |
| 2005     | 38            | 2647                           |
| 2006     | 52            | 3754                           |
| 2007     | 66            | 4398                           |
| 2008     | 71            | 4768                           |
| 2009     | 73            | 5715                           |
+----------+---------------+--------------------------------+
This review is an attempt to catalog and address all the known
issues with TDB and create solutions which address the problems
without significantly increasing complexity; all involved are far
too aware of the dangers of second system syndrome in rewriting a
successful project like this.

Note: the final decision was to make ntdb a separate library,
with a separate 'ntdb' namespace so both can potentially be
linked together. This document still refers to “tdb” everywhere,
for simplicity.
2 API Issues

2.1 tdb_open_ex Is Not Expandable

The tdb_open() call was expanded to tdb_open_ex(), which added an
optional hashing function and an optional logging function
argument. Additional arguments to open would require the
introduction of a tdb_open_ex2 call, and so on.

2.1.1 Proposed Solution <attributes>

tdb_open() will take a linked-list of attributes:

enum tdb_attribute {
        TDB_ATTRIBUTE_LOG = 0,
        TDB_ATTRIBUTE_HASH = 1
};

struct tdb_attribute_base {
        enum tdb_attribute attr;
        union tdb_attribute *next;
};

struct tdb_attribute_log {
        struct tdb_attribute_base base; /* .attr = TDB_ATTRIBUTE_LOG */
        tdb_log_func log_fn;
        void *log_private;
};

struct tdb_attribute_hash {
        struct tdb_attribute_base base; /* .attr = TDB_ATTRIBUTE_HASH */
        tdb_hash_func hash_fn;
        void *hash_private;
};

union tdb_attribute {
        struct tdb_attribute_base base;
        struct tdb_attribute_log log;
        struct tdb_attribute_hash hash;
};

This allows future attributes to be added, even if this expands
the size of the union.
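
As a usage sketch, a caller might chain two attributes and pass
the head of the list to tdb_open() (the exact open signature and
the my_log_fn/my_hash_fn callbacks here are illustrative
assumptions, not part of the text above):

union tdb_attribute log_attr, hash_attr;
struct tdb_context *tdb;

log_attr.base.attr = TDB_ATTRIBUTE_LOG;
log_attr.log.log_fn = my_log_fn;          /* caller-supplied */
log_attr.log.log_private = my_log_state;
log_attr.base.next = &hash_attr;          /* chain to next attribute */

hash_attr.base.attr = TDB_ATTRIBUTE_HASH;
hash_attr.hash.hash_fn = my_hash_fn;      /* caller-supplied */
hash_attr.hash.hash_private = NULL;
hash_attr.base.next = NULL;               /* end of list */

tdb = tdb_open("example.tdb", 0, O_RDWR|O_CREAT, 0600, &log_attr);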
2.1.2 Status

Complete.

2.2 tdb_traverse Makes Impossible Guarantees

tdb_traverse (and tdb_firstkey/tdb_nextkey) predate transactions,
and it was thought that it was important to guarantee that all
records which exist at the start and end of the traversal would
be included, and no record would be included twice.

This adds complexity (see [Reliable-Traversal-Adds]) and does not
work anyway for records which are altered (in particular, those
which are expanded may be effectively deleted and re-added behind
the traversal).

2.2.1 Proposed Solution <traverse-Proposed-Solution>

Abandon the guarantee. You will see every record if no changes
occur during your traversal, otherwise you will see some subset.
You can prevent changes by using a transaction or the locking
API.

2.2.2 Status

Complete. Delete-during-traverse will still delete every record,
too (assuming no other changes).
2.3 Nesting of Transactions Is Fraught

TDB has alternated between allowing nested transactions and not
allowing them. Various paths in the Samba codebase assume that
transactions will nest, and in a sense they can: the operation is
only committed to disk when the outer transaction is committed.
There are two problems, however:

1. Canceling the inner transaction will cause the outer
   transaction commit to fail, and will not undo any operations
   since the inner transaction began. This problem is soluble
   with some additional internal code.

2. An inner transaction commit can be cancelled by the outer
   transaction. This is desirable in the way which Samba's
   database initialization code uses transactions, but could be a
   surprise to any users expecting a successful transaction
   commit to expose changes to others.

The current solution is to specify the behavior at tdb_open(),
with the default currently that nested transactions are allowed.
This flag can also be changed at runtime.

2.3.1 Proposed Solution

Given the usage patterns, it seems that the “least-surprise”
behavior of disallowing nested transactions should become the
default. Additionally, it seems the outer transaction is the only
code which knows whether inner transactions should be allowed, so
a flag to indicate this could be added to tdb_transaction_start.
However, this behavior can be simulated with a wrapper which uses
tdb_add_flags() and tdb_remove_flags(), so the API should not be
expanded for this relatively-obscure case.

2.3.2 Status

Complete; the nesting flag has been removed.
2.4 Incorrect Hash Function is Not Detected

tdb_open_ex() allows the calling code to specify a different hash
function to use, but does not check that all other processes
accessing this tdb are using the same hash function. The result
is that records are missing from tdb_fetch().

2.4.1 Proposed Solution

The header should contain an example hash result (eg. the hash of
0xdeadbeef), and tdb_open_ex() should check that the given hash
function produces the same answer, or fail the tdb_open call.
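
A minimal sketch of that open-time check (the field and function
names here are assumptions for illustration, not the actual ntdb
ones):

/* Verify the caller's hash function against the example hash
 * stored in the header at creation time. */
static bool hash_matches(const struct tdb_context *tdb,
                         const struct tdb_header *hdr)
{
        uint32_t test = 0xdeadbeef;
        uint64_t h = tdb->hash_fn(&test, sizeof(test),
                                  hdr->hash_seed, tdb->hash_private);

        return h == hdr->hash_test;     /* if false, fail tdb_open() */
}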
2.4.2 Status

Complete.

2.5 tdb_set_max_dead/TDB_VOLATILE Expose Implementation

In response to scalability issues with the free list
(see [TDB-Freelist-Is]), two API workarounds have been
incorporated in TDB: tdb_set_max_dead() and the TDB_VOLATILE flag
to tdb_open. The latter actually calls the former with an
argument of “5”.

This code allows deleted records to accumulate without putting
them in the free list. On delete we iterate through each chain
and free them in a batch if there are more than max_dead entries.
These are never otherwise recycled except as a side-effect of a
tdb_repack.

2.5.1 Proposed Solution

With the scalability problems of the freelist solved, this API
can be removed. The TDB_VOLATILE flag may still be useful as a
hint that store and delete of records will be at least as common
as fetch in order to allow some internal tuning, but initially
will become a no-op.

2.5.2 Status

Complete. Unknown flags cause tdb_open() to fail as well, so they
can be detected at runtime.
2.6 <TDB-Files-Cannot> TDB Files Cannot Be Opened Multiple Times
In The Same Process

No process can open the same TDB twice; we check and disallow it.
This is an unfortunate side-effect of fcntl locks, which operate
on a per-file rather than per-file-descriptor basis, and do not
nest. Thus, closing any file descriptor on a file clears all the
locks obtained by this process, even if they were placed using a
different file descriptor!

Note that even if this were solved, deadlock could occur if
operations were nested: this is a more manageable programming
error in most cases.

2.6.1 Proposed Solution

We could lobby POSIX to fix the perverse rules, or at least lobby
Linux to violate them so that the most common implementation does
not have this restriction. This would be a generally good idea
for other fcntl lock users.

Samba uses a wrapper which hands out the same tdb_context to
multiple callers if this happens, and does simple reference
counting. We should do this inside the tdb library, which already
emulates lock nesting internally; it would need to recognize when
deadlock occurs within a single process. This would create a new
failure mode for tdb operations (while we currently handle
locking failures, they are impossible in normal use and a process
encountering them can do little but give up).

I do not see benefit in an additional tdb_open flag to indicate
whether re-opening is allowed, though there may be some benefit
in adding a call to detect when a tdb_context is shared, to allow
others to create such an API.

2.6.2 Status

Complete.
2.7 TDB API Is Not POSIX Thread-safe

The TDB API uses an error code which can be queried after an
operation to determine what went wrong. This programming model
does not work with threads, unless specific additional guarantees
are given by the implementation. In addition, even
otherwise-independent threads cannot open the same TDB (as
in [TDB-Files-Cannot]).

2.7.1 Proposed Solution

Rearchitecting the API to include a tdb_errcode pointer would be
a great deal of churn, but fortunately most functions return 0 on
success and -1 on error: we can change these to return 0 on
success and a negative error code on error, and the API remains
similar to before. The tdb_fetch, tdb_firstkey and tdb_nextkey
functions need to take a TDB_DATA pointer and return an error
code. It is also simpler to have tdb_nextkey replace its key
argument in place, freeing up any old .dptr.
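
Sketched in that style (these signatures illustrate the
direction, and are not necessarily the final ntdb prototypes):

/* Returns 0 (TDB_SUCCESS) or a negative TDB_ERR_* value. */
enum TDB_ERROR tdb_store(struct tdb_context *tdb,
                         TDB_DATA key, TDB_DATA dbuf, int flag);

/* On success fills in *data; the caller frees data->dptr. */
enum TDB_ERROR tdb_fetch(struct tdb_context *tdb,
                         TDB_DATA key, TDB_DATA *data);

/* Replaces *key in place, freeing the old key->dptr. */
enum TDB_ERROR tdb_nextkey(struct tdb_context *tdb, TDB_DATA *key);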
Internal locking is required to make sure that fcntl locks do not
overlap between threads, and also that the global list of tdbs is
maintained.

The aim is that building tdb with -DTDB_PTHREAD will result in a
pthread-safe version of the library, and otherwise no overhead
will exist. Alternatively, a hooking mechanism similar to that
proposed for [Proposed-Solution-locking-hook] could be used to
enable pthread locking at runtime.

2.7.2 Status

Incomplete; API has been changed but thread safety has not been
implemented.
2.8 *_nonblock Functions And *_mark Functions Expose
Implementation

CTDB[footnote: Clustered TDB, see http://ctdb.samba.org] wishes
to operate on TDB in a non-blocking manner. This is currently
done as follows:

1. Call the _nonblock variant of an API function (eg.
   tdb_lockall_nonblock). If this fails:

2. Fork a child process, and wait for it to call the normal
   variant (eg. tdb_lockall).

3. If the child succeeds, call the _mark variant to indicate we
   already have the locks (eg. tdb_lockall_mark).

4. Upon completion, tell the child to release the locks (eg.
   tdb_unlockall).

5. Indicate to tdb that it should consider the locks removed (eg.
   tdb_unlockall_mark).

There are several issues with this approach. Firstly, adding two
new variants of each function clutters the API for an obscure
use, and so not all functions have three variants. Secondly, it
assumes that all paths of the functions ask for the same locks,
otherwise the parent process will have to get a lock which the
child doesn't have under some circumstances. I don't believe this
is currently the case, but it constrains the implementation.

2.8.1 Proposed Solution <Proposed-Solution-locking-hook>

Implement a hook for locking methods, so that the caller can
control the calls to create and remove fcntl locks. In this
scenario, ctdbd would operate as follows:

1. Call the normal API function, eg tdb_lockall().

2. When the lock callback comes in, check if the child has the
   lock. Initially, this is always false. If so, return 0.
   Otherwise, try to obtain it in non-blocking mode. If that
   fails, return EWOULDBLOCK.

3. Release locks in the unlock callback as normal.

4. If tdb_lockall() fails, see if we recorded a lock failure; if
   so, call the child to repeat the operation.

5. The child records what locks it obtains, and returns that
   information to the parent.

6. When the child has succeeded, goto 1.

This is flexible enough to handle any potential locking scenario,
even when lock requirements change. It can be optimized so that
the parent does not release locks, just tells the child which
locks it doesn't need to obtain.

It also keeps the complexity out of the API, and in ctdbd where
it is needed.
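
One way to express such a hook is as an attribute in the style
of [attributes]. The sketch below is modeled on a locking
attribute of this shape; the exact names and signatures should be
treated as assumptions rather than the final API:

struct tdb_attribute_flock {
        struct tdb_attribute_base base; /* .attr = TDB_ATTRIBUTE_FLOCK */
        /* Return 0 on success, or an errno such as EWOULDBLOCK;
         * ctdbd's lock() would try non-blocking and record failures. */
        int (*lock)(int fd, int rw, off_t off, off_t len, bool waitflag,
                    void *data);
        int (*unlock)(int fd, int rw, off_t off, off_t len, void *data);
        void *data;
};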
2.8.2 Status

Complete.

2.9 tdb_chainlock Functions Expose Implementation

tdb_chainlock locks some number of records, including the record
indicated by the given key. This gave atomicity guarantees;
no-one can start a transaction, alter, read or delete that key
while the lock is held.

It also makes the same guarantee for any other key in the chain,
which is an internal implementation detail and potentially a
cause for deadlock.

2.9.1 Proposed Solution

None. It would be nice to have an explicit single entry lock
which affected no other keys. Unfortunately, this won't work for
an entry which doesn't exist. Thus while chainlock may be
implemented more efficiently for the existing case, it will still
have overlap issues with the non-existing case. So it is best to
keep the current (lack of) guarantee about which records will be
affected, to avoid constraining our implementation.

2.10 Signal Handling is Not Race-Free

The tdb_setalarm_sigptr() call allows the caller's signal handler
to indicate that the tdb locking code should return with a
failure, rather than trying again when a signal is received (and
errno == EAGAIN). This is usually used to implement timeouts.
Unfortunately, this does not work in the case where the signal is
received before the tdb code enters the fcntl() call to place the
lock: the code will sleep within the fcntl() call, unaware that
the signal wants it to exit. In the case of long timeouts, this
does not happen in practice.

2.10.1 Proposed Solution

The locking hooks proposed in [Proposed-Solution-locking-hook]
would allow the user to decide whether to fail the lock
acquisition on a signal. This allows the caller to choose their
own compromise: they could narrow the race by checking
immediately before the fcntl call.[footnote: It may be possible
to make this race-free in some implementations by having the
signal handler alter the struct flock to make it invalid. This
will cause the fcntl() lock call to fail with EINVAL if the
signal occurs before the kernel is entered, otherwise EAGAIN.]

2.10.2 Status

Complete.
2.11 The API Uses Gratuitous Typedefs, Capitals

typedefs are useful for providing source compatibility when types
can differ across implementations, or arguably in the case of
function pointer definitions which are hard for humans to parse.
Otherwise it is simply obfuscation and pollutes the namespace.
Capitalization is usually reserved for compile-time constants and
macros.

TDB_CONTEXT: There is no reason to use this over 'struct
tdb_context'; the definition isn't visible to the API user
anyway.

TDB_DATA: There is no reason to use this over struct TDB_DATA;
the struct needs to be understood by the API user.

struct TDB_DATA: This would normally be called 'struct tdb_data'.

enum TDB_ERROR: Similarly, this would normally be enum tdb_error.

2.11.1 Proposed Solution

None. Introducing lower case variants would please pedants like
myself, but if it were done the existing ones should be kept.
There is little point forcing a purely cosmetic change upon tdb
users.
2.12 <tdb_log_func-Doesnt-Take> tdb_log_func Doesn't Take The
Private Pointer

For API compatibility reasons, the logging function needs to call
tdb_get_logging_private() to retrieve the pointer registered by
tdb_open_ex for logging.

2.12.1 Proposed Solution

It should simply take an extra argument, since we are prepared to
break the API/ABI.

2.12.2 Status

Complete.

2.13 Various Callback Functions Are Not Typesafe

The callback functions in tdb_set_logging_function
(after [tdb_log_func-Doesnt-Take] is resolved), tdb_parse_record,
tdb_traverse, tdb_traverse_read and tdb_check all take void * and
must internally convert it to the argument type they were
expecting.

If this type changes, the compiler will not produce warnings on
the callers, since it only sees void *.

2.13.1 Proposed Solution

With careful use of macros, we can create callback functions
which give a warning when used on gcc and the types of the
callback and its private argument differ. Unsupported compilers
will not give a warning, which is no worse than now. In addition,
the callbacks become clearer, as they need not use void * for
their parameter.

See CCAN's typesafe_cb module at
http://ccan.ozlabs.org/info/typesafe_cb.html
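
For illustration, a traversal wrapper in that style might look
like this (a sketch using CCAN's typesafe_cb_preargs(); the
actual macro details in ntdb may differ):

#include <ccan/typesafe_cb/typesafe_cb.h>

/* The underlying function still takes a void * callback... */
int tdb_traverse_(struct tdb_context *tdb,
                  int (*fn)(struct tdb_context *,
                            TDB_DATA, TDB_DATA, void *),
                  void *private_data);

/* ...but the macro only compiles without warning (on gcc) if
 * fn's last parameter type matches the type of arg. */
#define tdb_traverse(tdb, fn, arg)                                  \
        tdb_traverse_((tdb),                                        \
                      typesafe_cb_preargs(int, void *, (fn), (arg), \
                                          struct tdb_context *,     \
                                          TDB_DATA, TDB_DATA),      \
                      (arg))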
2.13.2 Status

Complete.

2.14 TDB_CLEAR_IF_FIRST Must Be Specified On All Opens,
tdb_reopen_all Problematic

The TDB_CLEAR_IF_FIRST flag to tdb_open indicates that the TDB
file should be cleared if the caller discovers it is the only
process with the TDB open. However, if any caller does not
specify TDB_CLEAR_IF_FIRST it will not be detected, so will have
the TDB erased underneath them (usually resulting in a crash).

There is a similar issue on fork(); if the parent exits (or
otherwise closes the tdb) before the child calls tdb_reopen_all()
to establish the lock used to indicate the TDB is opened by
someone, a TDB_CLEAR_IF_FIRST opener at that moment will believe
it alone has opened the TDB and will erase it.

2.14.1 Proposed Solution

Remove TDB_CLEAR_IF_FIRST. Other workarounds are possible, but
see [TDB_CLEAR_IF_FIRST-Imposes-Performance].

2.14.2 Status

Complete. An open hook is provided to replicate this
functionality if required.

2.15 Extending The Header Is Difficult

We have reserved (zeroed) words in the TDB header, which can be
used for future features. If the future features are compulsory,
the version number must be updated to prevent old code from
accessing the database. But if the future feature is optional, we
have no way of telling if older code is accessing the database or
not.

2.15.1 Proposed Solution

The header should contain a “format variant” value (64-bit). This
is divided into two 32-bit parts:

1. The lower part reflects the format variant understood by code
   accessing the database.

2. The upper part reflects the format variant you must understand
   to write to the database (otherwise you can only open for
   reading).

The latter field can only be written at creation time, the former
should be written under the OPEN_LOCK when opening the database
for writing, if the variant of the code is lower than the current
lowest variant.

This should allow backwards-compatible features to be added, and
detection if older code (which doesn't understand the feature)
writes to the database.
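
A sketch of the resulting open-time logic (the header field,
constant and helper names here are all assumptions):

/* TDB_MY_VARIANT is the format variant this code understands. */
static enum TDB_ERROR check_format_variant(struct tdb_context *tdb,
                                           struct tdb_header *hdr)
{
        uint32_t read_variant = (uint32_t)hdr->format_variant;
        uint32_t write_variant = (uint32_t)(hdr->format_variant >> 32);

        if (write_variant > TDB_MY_VARIANT)
                return mark_read_only(tdb); /* too new to write safely */

        /* Under the OPEN_LOCK: record that older code now writes
         * to this database. */
        if (TDB_MY_VARIANT < read_variant) {
                hdr->format_variant &= ~(uint64_t)0xffffffff;
                hdr->format_variant |= TDB_MY_VARIANT;
                return write_header(tdb, hdr);
        }
        return TDB_SUCCESS;
}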
2.15.2 Status

Complete.

2.16 Record Headers Are Not Expandable

If we later want to add (say) checksums on keys and data, it
would require another format change, which we'd like to avoid.

2.16.1 Proposed Solution

We often have extra padding at the tail of a record. If we ensure
that the first byte (if any) of this padding is zero, we will
have a way for future changes to detect code which doesn't
understand a new format: the new code would write (say) a 1 at
the tail, and thus if there is no tail or the first byte is 0, we
would know the extension is not present on that record.

2.16.2 Status

Complete.
2.17 TDB Does Not Use Talloc

Many users of TDB (particularly Samba) use the talloc allocator,
and thus have to wrap TDB in a talloc context to use it
conveniently.

2.17.1 Proposed Solution

The allocation within TDB is not complicated enough to justify
the use of talloc, and I am reluctant to force another
(excellent) library on TDB users. Nonetheless a compromise is
possible. An attribute (see [attributes]) can be added later to
tdb_open() to provide an alternate allocation mechanism,
specifically for talloc but usable by any other allocator (which
would ignore the “context” argument).

This would form a talloc hierarchy as expected, but the caller
would still have to attach a destructor to the tdb context
returned from tdb_open to close it. All TDB_DATA fields would be
children of the tdb_context, and the caller would still have to
manage them (using talloc_free() or talloc_steal()).

2.17.2 Status

Complete, using the NTDB_ATTRIBUTE_ALLOCATOR attribute.
3 Performance And Scalability Issues

3.1 <TDB_CLEAR_IF_FIRST-Imposes-Performance> TDB_CLEAR_IF_FIRST
Imposes Performance Penalty

When TDB_CLEAR_IF_FIRST is specified, a 1-byte read lock is
placed at offset 4 (aka. the ACTIVE_LOCK). While these locks
never conflict in normal tdb usage, they do add substantial
overhead for most fcntl lock implementations when the kernel
scans to detect if a lock conflict exists. This is often a single
linked list, making the time to acquire and release a fcntl lock
O(N) where N is the number of processes with the TDB open, not
the number actually doing work.

In a Samba server it is common to have huge numbers of clients
sitting idle, and thus they have weaned themselves off the
TDB_CLEAR_IF_FIRST flag.[footnote: There is a flag to
tdb_reopen_all() which is used for this optimization: if the
parent process will outlive the child, the child does not need
the ACTIVE_LOCK. This is a workaround for this very performance
issue.]

3.1.1 Proposed Solution

Remove the flag. It was a neat idea, but even trivial servers
tend to know when they are initializing for the first time and
can simply unlink the old tdb at that point.

3.1.2 Status

Complete.
3.2 TDB Files Have a 4G Limit

This seems to be becoming an issue (so much for “trivial”!),
particularly for ldb.

3.2.1 Proposed Solution

A new, incompatible TDB format which uses 64 bit offsets
internally rather than 32 bit as now. For simplicity of endian
conversion (which TDB does on the fly if required), all values
will be 64 bit on disk. In practice, some upper bits may be used
for other purposes, but at least 56 bits will be available for
file offsets.

tdb_open() will automatically detect the old version, and even
create old-format files if TDB_VERSION6 is specified to tdb_open.

32 bit processes will still be able to access TDBs larger than 4G
(assuming that their off_t allows them to seek to 64 bits); they
will gracefully fall back as they fail to mmap. This can happen
already with large TDBs.

Old versions of tdb will fail to open the new TDB files (since 28
August 2009, commit 398d0c29290: prior to that, any unrecognized
file format would be erased and initialized as a fresh tdb!).

3.2.2 Status

Complete.

3.3 TDB Records Have a 4G Limit

This has not been a reported problem, and the API uses size_t
which can be 64 bit on 64 bit platforms. However, other limits
may have made such an issue moot.

3.3.1 Proposed Solution

Record sizes will be 64 bit, with an error returned on 32 bit
platforms which try to access such records (the current
implementation would return TDB_ERR_OOM in a similar case). It
seems unlikely that 32 bit keys will be a limitation, so the
implementation may not support this (see [sub:Records-Incur-A]).

3.3.2 Status

Complete.
3.4 Hash Size Is Determined At TDB Creation Time

TDB contains a number of hash chains in the header; the number is
specified at creation time, and defaults to 131. This is such a
bottleneck on large databases (as each hash chain gets quite
long) that LDB uses 10,000 for this hash. In general it is
impossible to know what the 'right' answer is at database
creation time.

3.4.1 Proposed Solution <sub:Hash-Size-Solution>

After comprehensive performance testing on various scalable hash
variants[footnote: http://rusty.ozlabs.org/?p=89 and
http://rusty.ozlabs.org/?p=94. This was annoying because I was
previously convinced that an expanding tree of hashes would be
very close to optimal.], it became clear that it is hard to beat
a straight linear hash table which doubles in size when it
reaches saturation. Unfortunately, altering the hash table
introduces serious locking complications: the entire hash table
needs to be locked to enlarge it, and others might be holding
locks. Particularly insidious are insertions done under
tdb_chainlock.

Thus an expanding layered hash will be used: an array of hash
groups, with each hash group exploding into pointers to lower
hash groups once it fills, turning into a hash tree. This has
implications for locking: we must lock the entire group in case
we need to expand it, yet we don't know how deep the tree is at
that point.

Note that bits from the hash table entries should be stolen to
hold more hash bits to reduce the penalty of collisions. We can
use the otherwise-unused lower 3 bits. If we limit the size of
the database to 64 exabytes, we can use the top 8 bits of the
hash entry as well. These 11 bits would reduce false positives
down to 1 in 2000, which is more than we need: we can use one of
the bits to indicate that the extra hash bits are valid. This
means we can choose not to re-hash all entries when we expand a
hash group; we simply use the next bits we need and mark them
invalid.
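
To make the bit budget concrete, here is one illustrative packing
of a 64-bit hash-table entry (the exact layout is an assumption,
not necessarily the one implemented):

/* Offsets are 8-byte aligned and capped at 2^56 (64EB), leaving
 * the top 8 and bottom 3 bits of each entry free for extra hash. */
#define ENTRY_OFFSET_MASK    0x00fffffffffffff8ULL /* bits 3..55 */
#define ENTRY_OFFSET(e)      ((e) & ENTRY_OFFSET_MASK)
#define ENTRY_TOP_HASH(e)    ((e) >> 56)      /* 8 extra hash bits */
#define ENTRY_EXTRA_VALID(e) ((e) & 1)        /* extra bits valid? */
#define ENTRY_LOW_HASH(e)    (((e) >> 1) & 3) /* 2 more hash bits */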
3.4.2 Status

Ignore. Scaling the hash automatically proved inefficient at
small hash sizes; we default to an 8192-element hash (changeable
via NTDB_ATTRIBUTE_HASHSIZE), and when buckets clash we expand to
an array of hash entries. This scales slightly better than the
tdb chain (due to the 8 top bits containing extra hash).
3.5 <TDB-Freelist-Is> TDB Freelist Is Highly Contended

TDB uses a single linked list for the free list. Allocation
occurs as follows, using heuristics which have evolved over time:

1. Get the free list lock for this whole operation.

2. Multiply length by 1.25, so we always over-allocate by 25%.

3. Set the slack multiplier to 1.

4. Examine the current freelist entry: if it is > length but <
   the current best case, remember it as the best case.

5. Multiply the slack multiplier by 1.05.

6. If our best fit so far is less than length * slack multiplier,
   return it. The slack will be turned into a new free record if
   it's large enough.

7. Otherwise, go on to the next freelist entry.

Deleting a record occurs as follows:

1. Lock the hash chain for this whole operation.

2. Walk the chain to find the record, keeping the prev pointer
   offset.

3. If max_dead is non-zero:

   (a) Walk the hash chain again and count the dead records.

   (b) If it's more than max_dead, bulk free all the dead ones
       (similar to steps 4 and below, but the lock is only
       obtained once).

   (c) Simply mark this record as dead and return.

4. Get the free list lock for the remainder of this operation.

5. <right-merging> Examine the following block to see if it is
   free; if so, enlarge the current block and remove that block
   from the free list. This was disabled, as removal from the
   free list was O(entries-in-free-list).

6. Examine the preceding block to see if it is free: for this
   reason, each block has a 32-bit tailer which indicates its
   length. If it is free, expand it to cover our new block and
   return.

7. Otherwise, prepend ourselves to the free list.

Disabling right-merging (step [right-merging]) causes
fragmentation; the other heuristics proved insufficient to
address this, so the final answer to this was that when we expand
the TDB file inside a transaction commit, we repack the entire
tdb.

The single list lock limits our allocation rate; due to the other
issues this is not currently seen as a bottleneck.
3.5.1 Proposed Solution

The first step is to remove all the current heuristics, as they
obviously interact, then examine them once the lock contention is
addressed.

The free list must be split to reduce contention. Assuming
perfect free merging, we can at most have 1 free list entry for
each entry. This implies that the number of free lists is related
to the size of the hash table, but as it is rare to walk a large
number of free list entries we can use far fewer, say 1/32 of the
number of hash buckets.

It seems tempting to try to reuse the hash implementation which
we use for records here, but we have two ways of searching for
free entries: for allocation we search by size (and possibly
zone) which produces too many clashes for our hash table to
handle well, and for coalescing we search by address. Thus an
array of doubly-linked free lists seems preferable.

There are various benefits in using per-size free lists
(see [sub:TDB-Becomes-Fragmented]) but it's not clear this would
reduce contention in the common case where all processes are
allocating/freeing the same size. Thus we almost certainly need
to divide in other ways: the most obvious is to divide the file
into zones, and use a free list (or table of free lists) for
each. This approximates address ordering.

Unfortunately it is difficult to know what heuristics should be
used to determine zone sizes, and our transaction code relies on
being able to create a “recovery area” by simply appending to the
file (difficult if it would need to create a new zone header).
Thus we use a linked-list of free tables; currently we only ever
create one, but if there is more than one we choose one at random
to use. In future we may use heuristics to add new free tables on
contention. We only expand the file when all free tables are
exhausted.

The basic algorithm is as follows. Freeing is simple (see the
sketch after this list):

1. Identify the correct free list.

2. Lock the corresponding list.

3. Re-check the list (we didn't have a lock, sizes could have
   changed): relock if necessary.

4. Place the freed entry in the list.

Allocation is a little more complicated, as we perform delayed
coalescing at this point:

1. Pick a free table; usually the previous one.

2. Lock the corresponding list.

3. If the top entry is large enough, remove it from the list and
   return it.

4. Otherwise, coalesce entries in the list. If there was no entry
   large enough, unlock the list and try the next largest list.

5. If no list has an entry which meets our needs, try the next
   free table.

6. If no free table satisfies, expand the file.

This optimizes rapid insert/delete of free list entries by not
coalescing them all the time. First-fit address ordering seems to
be fairly good for keeping fragmentation low
(see [sub:TDB-Becomes-Fragmented]). Note that address ordering
does not need a tailer to coalesce, though if we needed one we
could have one cheaply: see [sub:Records-Incur-A].

Each free entry has the free table number in the header: less
than 255. It also contains a doubly-linked list for easy
deletion.
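
A sketch of the four freeing steps above (all names here are
illustrative assumptions, not the actual ntdb internals):

static void add_free_record(struct tdb_context *tdb,
                            tdb_off_t off, tdb_len_t len)
{
        unsigned ft = tdb->current_ftable;    /* pick a free table */
        unsigned b = size_to_bucket(len);     /* 1. identify the list */

        lock_bucket(tdb, ft, b);              /* 2. lock the list */

        /* 3. We had no lock above, so the record may have been
         *    coalesced and changed size: re-check and relock. */
        while (b != size_to_bucket(len = record_length(tdb, off))) {
                unlock_bucket(tdb, ft, b);
                b = size_to_bucket(len);
                lock_bucket(tdb, ft, b);
        }

        list_prepend(tdb, ft, b, off);        /* 4. place the entry */
        unlock_bucket(tdb, ft, b);
}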
3.6 <sub:TDB-Becomes-Fragmented> TDB Becomes Fragmented

Much of this is a result of allocation strategy[footnote: The
Memory Fragmentation Problem: Solved? Johnstone & Wilson 1995
ftp://ftp.cs.utexas.edu/pub/garbage/malloc/ismm98.ps] and
deliberate hobbling of coalescing; internal fragmentation (aka
overallocation) is deliberately set at 25%, and external
fragmentation is only cured by the decision to repack the entire
db when a transaction commit needs to enlarge the file.

3.6.1 Proposed Solution

The 25% overhead on allocation works in practice for ldb because
indexes tend to expand by one record at a time. This internal
fragmentation can be resolved by having an “expanded” bit in the
header to note entries that have previously expanded, and
allocating more space for them.

There is a spectrum of possible solutions for external
fragmentation: one end is to use a fragmentation-avoiding
allocation strategy such as a best-fit address-order allocator.
The other end of the spectrum would be to use a bump allocator
(very fast and simple) and simply repack the file when we reach
the end.

There are three problems with efficient fragmentation-avoiding
allocators: they are non-trivial, they tend to use a single free
list for each size, and there's no evidence that tdb allocation
patterns will match those recorded for general allocators (though
it seems likely).

Thus we don't spend too much effort on external fragmentation; we
will be no worse than the current code if we need to repack on
occasion. More effort is spent on reducing freelist contention,
and reducing overhead.
3.7 <sub:Records-Incur-A> Records Incur A 28-Byte Overhead

Each TDB record has a header as follows:

struct tdb_record {
        tdb_off_t next;     /* offset of the next record in the list */
        tdb_len_t rec_len;  /* total byte length of record */
        tdb_len_t key_len;  /* byte length of key */
        tdb_len_t data_len; /* byte length of data */
        uint32_t full_hash; /* the full 32 bit hash of the key */
        uint32_t magic;     /* try to catch errors */
        /* the following union is implied:
           union {
                   char record[rec_len];
                   struct {
                           char key[key_len];
                           char data[data_len];
                   }
                   uint32_t totalsize; (tailer)
           }
        */
};

Naively, this would double to a 56-byte overhead on a 64 bit
implementation.

3.7.1 Proposed Solution

We can use various techniques to reduce this for an allocated
block:

1. The 'next' pointer is not required, as we are using a flat
   hash table.

2. 'rec_len' can instead be expressed as an addition to key_len
   and data_len (it accounts for wasted or overallocated length
   in the record). Since the record length is always a multiple
   of 8, we can conveniently fit it in 32 bits (representing up
   to 35 bits).

3. 'key_len' and 'data_len' can be reduced. I'm unwilling to
   restrict 'data_len' to 32 bits, but instead we can combine the
   two into one 64-bit field and use a 5 bit value which
   indicates at what bit to divide the two. Keys are unlikely to
   scale as fast as data, so I'm assuming a maximum key size of
   32 bits.

4. 'full_hash' is used to avoid a memcmp on the “miss” case, but
   this is diminishing returns after a handful of bits (at 10
   bits, it reduces 99.9% of false memcmp). As an aside, as the
   lower bits are already incorporated in the hash table
   resolution, the upper bits should be used here. Note that it's
   not clear that these bits will be a win, given the extra bits
   in the hash table itself (see [sub:Hash-Size-Solution]).

5. 'magic' does not need to be enlarged: it currently reflects
   one of 5 values (used, free, dead, recovery, and
   unused_recovery). It is useful for quick sanity checking
   however, and should not be eliminated.

6. 'tailer' is only used to coalesce free blocks (so a block to
   the right can find the header to check if this block is
   free). This can be replaced by a single 'free' bit in the
   header of the following block (and the tailer only exists in
   free blocks).[footnote: This technique from Thomas Standish.
   Data Structure Techniques. Addison-Wesley, Reading,
   Massachusetts, 1980.] The current proposed coalescing
   algorithm doesn't need this, however.

This produces a 16 byte used header like this:

struct tdb_used_record {
        uint32_t used_magic : 16,
                 key_data_divide : 5,
                 top_hash : 11;
        uint32_t extra_octets;
        uint64_t key_and_data_len;
};

And a free record like this:

struct tdb_free_record {
        uint64_t free_magic : 8,
                 prev : 56;
        uint64_t free_table : 8,
                 total_length : 56;
        uint64_t next;
};

Note that by limiting valid offsets to 56 bits, we can pack
everything we need into 3 64-bit words, meaning our minimum
record size is 8 bytes.
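
As a concrete illustration of point 3, the packed length field
might be decoded like this (this encoding is an assumption for
illustration, not necessarily the one ntdb uses):

/* Assumed encoding: the low 'key_data_divide' bits of
 * key_and_data_len hold the key length; the remaining bits hold
 * the data length. */
static inline uint64_t rec_key_len(const struct tdb_used_record *r)
{
        return r->key_and_data_len
               & (((uint64_t)1 << r->key_data_divide) - 1);
}

static inline uint64_t rec_data_len(const struct tdb_used_record *r)
{
        return r->key_and_data_len >> r->key_data_divide;
}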
3.7.2 Status

Complete.

3.8 Transaction Commit Requires 4 fdatasync

The current transaction algorithm is:

1. write_recovery_data();

2. sync();

3. write_recovery_header();

4. sync();

5. overwrite_with_new_data();

6. sync();

7. remove_recovery_header();

8. sync();

On current ext3, each sync flushes all data to disk, so the next
3 syncs are relatively expensive. But this could become a
performance bottleneck on other filesystems such as ext4.

3.8.1 Proposed Solution

Neil Brown points out that this is overzealous, and only one sync
is needed:

1. Bundle the recovery data, a transaction counter and a strong
   checksum of the new data.

2. Strong checksum that whole bundle.

3. Store the bundle in the database.

4. Overwrite the older of the two recovery pointers in the
   header (identified using the transaction counter) with the
   offset of this bundle.

5. sync.

6. Write the new data to the file.

Checking for recovery means identifying the latest bundle with a
valid checksum and using the new data checksum to ensure that it
has been applied. This is more expensive than the current check,
but need only be done at open. For running databases, a separate
header field can be used to indicate a transaction in progress;
we need only check for recovery if this is set.
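
A sketch of that commit path (the struct and helper names are
assumptions for illustration):

static enum TDB_ERROR one_sync_commit(struct tdb_context *tdb)
{
        struct recovery_bundle b;
        tdb_off_t off;

        /* Steps 1 & 2: bundle recovery data, a counter and
         * checksums (bundle_csum is the last field of b). */
        b.counter = tdb->transaction_counter + 1;
        b.new_data_csum = strong_checksum(tdb->txn_new_data,
                                          tdb->txn_new_len);
        collect_recovery_data(tdb, &b);
        b.bundle_csum = strong_checksum(&b,
                        offsetof(struct recovery_bundle, bundle_csum));

        /* Step 3: store the bundle in the database. */
        off = write_bundle(tdb, &b);

        /* Step 4: overwrite the older of the two recovery pointers. */
        write_recovery_pointer(tdb, older_recovery_slot(tdb), off);

        /* Step 5: the only sync. */
        if (fdatasync(tdb->fd) != 0)
                return TDB_ERR_IO;

        /* Step 6: now it is safe to overwrite with the new data. */
        return apply_new_data(tdb);
}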
3.8.2 Status

Deferred.

3.9 <sub:TDB-Does-Not> TDB Does Not Have Snapshot Support

3.9.1 Proposed Solution

None. At some point you say “use a real database” (but
see [replay-attribute]).

But as a thought experiment, if we implemented transactions to
only overwrite free entries (this is tricky: a record's free
state could not be indicated by a header within the entry itself,
but would have to be inferred from metadata elsewhere), and kept
a pointer to the hash table, we could create an entirely new
commit without destroying existing data. Then it would be easy to
implement snapshots in a similar way.

This would not allow arbitrary changes to the database, such as
tdb_repack does, and would require more space (since we have to
preserve the current and future entries at once). If we used hash
trees rather than one big hash table, we might only have to
rewrite some sections of the hash, too.

We could then implement snapshots using a similar method, using
multiple different hash tables/free tables.

3.9.2 Status

Deferred.
3.10 Transactions Cannot Operate in Parallel

This would be useless for ldb, as it hits the index records with
just about every update. It would add significant complexity in
resolving clashes, and cause all transaction callers to write
their code to loop in the case where the transactions spuriously
failed.

3.10.1 Proposed Solution

None (but see [replay-attribute]). We could solve a small part of
the problem by providing read-only transactions. These would
allow one write transaction to begin, but it could not commit
until all r/o transactions are done. This would require a new
RO_TRANSACTION_LOCK, which would be upgraded on commit.

3.10.2 Status

Deferred.
3.11 Default Hash Function Is Suboptimal

The Knuth-inspired multiplicative hash used by tdb is fairly slow
(especially if we expand it to 64 bits), and works best when the
hash bucket size is a prime number (which also means a slow
modulus). In addition, it is highly predictable, which could
potentially lead to a Denial of Service attack in some TDB uses.

3.11.1 Proposed Solution

The Jenkins lookup3 hash[footnote:
http://burtleburtle.net/bob/c/lookup3.c] is a fast and
superbly-mixing hash. It's used by the Linux kernel and almost
everything else. This has the particular properties that it takes
an initial seed, and produces two 32 bit hash numbers, which we
can combine into a 64-bit hash.

The seed should be created at tdb-creation time from some random
source, and placed in the header. This is far from foolproof, but
adds a little bit of protection against hash bombing.
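
For illustration, combining lookup3's two 32-bit results into a
seeded 64-bit hash might look like this (hashlittle2() is from
lookup3.c itself; the wrapper name is ours):

#include <stdint.h>
#include <stddef.h>

/* From lookup3.c: *pc and *pb are both seeds in and results out. */
void hashlittle2(const void *key, size_t length,
                 uint32_t *pc, uint32_t *pb);

static uint64_t tdb_hash64(const void *key, size_t len, uint64_t seed)
{
        uint32_t h1 = (uint32_t)seed;
        uint32_t h2 = (uint32_t)(seed >> 32);

        hashlittle2(key, len, &h1, &h2);
        return ((uint64_t)h2 << 32) | h1;
}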
3.11.2 Status

Complete.

3.12 <Reliable-Traversal-Adds> Reliable Traversal Adds Complexity

We lock a record during traversal iteration, and try to grab that
lock in the delete code. If that grab on delete fails, we simply
mark it deleted and continue onwards; traversal checks for this
condition and does the delete when it moves off the record.

If traversal terminates, the dead record may be left
indefinitely.

3.12.1 Proposed Solution

Remove reliability guarantees; see [traverse-Proposed-Solution].

3.12.2 Status

Complete.

3.13 Fcntl Locking Adds Overhead

Placing a fcntl lock means a system call, as does removing one.
This is actually one reason why transactions can be faster
(everything is locked once at transaction start). In the
uncontended case, this overhead can theoretically be eliminated.

3.13.1 Proposed Solution

None.

We tried this before with spinlock support, in the early days of
TDB, and it didn't make much difference except in manufactured
benchmarks.

We could use spinlocks (with futex kernel support under Linux),
but it means that we lose automatic cleanup when a process dies
with a lock. There is a method of auto-cleanup under Linux, but
it's not supported by other operating systems. We could
reintroduce a clear-if-first-style lock and sweep for dead
futexes on open, but that wouldn't help the normal case of one
concurrent opener dying. Increasingly elaborate repair schemes
could be considered, but they require an ABI change (everyone
must use them) anyway, so there's no need to do this at the same
time as everything else.
3.14 Some Transactions Don't Require Durability

Volker points out that gencache uses a CLEAR_IF_FIRST tdb for
normal (fast) usage, and occasionally empties the results into a
transactional TDB. This kind of usage prioritizes performance
over durability: as long as we are consistent, data can be lost.

This would be more neatly implemented inside tdb: a “soft”
transaction commit (ie. syncless) which meant that data may be
reverted on a crash.

3.14.1 Proposed Solution

None.

Unfortunately any transaction scheme which overwrites old data
requires a sync before that overwrite to avoid the possibility of
corruption.

It seems possible to use a scheme similar to that described
in [sub:TDB-Does-Not], where transactions are committed without
overwriting existing data, and an array of top-level pointers
were available in the header. If the transaction is “soft” then
we would not need a sync at all: existing processes would pick up
the new hash table and free list and work with that.

At some later point, a sync would allow recovery of the old data
into the free lists (perhaps when the array of top-level pointers
filled). On crash, tdb_open() would examine the array of top
levels, and apply the transactions until it encountered an
invalid checksum.
3.15 Tracing Is Fragile, Replay Is External

The current TDB has compile-time-enabled tracing code, but it
often breaks as it is not enabled by default. In a similar way,
the ctdb code has an external wrapper which does replay tracing
so it can coordinate cluster-wide transactions.

3.15.1 Proposed Solution <replay-attribute>

Tridge points out that an attribute can be later added to
tdb_open (see [attributes]) to provide replay/trace hooks, which
could become the basis for this and future parallel transactions
and snapshot support.

3.15.2 Status

Deferred.