Write a test for empty_char_constant
defined cannot be used as a macro name
(done) Add "defined" and only accept it in appropriate circumstances
Update the simple tokenizer compulsory test so things will compile
Handle cases like escaped question marks and pound symbols that I don't understand yet
(done) Fix #include <stdio.h> to read the include directive correctly
txt/orig state of affairs:
    The problem is that there are two ways to interpret line,col:
        With respect to txt
        With respect to orig
    This isn't a problem when txt and orig point to the same character, as in:
        int in\
        dex

        int \
        index /* Here, the backslash break should be gobbled up by the space token */
    Here, line,col has no ambiguity as to where it should point. However, when txt and orig point to different characters (i.e. at the beginning of a line):
        \
        int index
    line,col could point either to orig or to the first real character, so we will do the latter.
    Moreover, will a newline followed by backslash breaks generate a token that gobbles up said breaks? I believe it will, but no need to make this mandatory.
    Thus, on a lookup with a txt pointer, the line/col/orig should match the real character, not the preceding backslash breaks.
    I've been assuming that every token starts with its first character, neglecting the case where a line starts with backslash breaks. The question is: given the txt pointer to the first character, where should the derived orig land?
    Currently, the orig lands after the leading backslash breaks, when it should probably land before them.
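The splice handling above hinges on the translation step that removes backslash-newline sequences. As a minimal standalone sketch (the helper name is invented; this is not the actual ccan_tokenizer code), producing the unbroken text looks like:

```c
#include <stddef.h>

/* Remove backslash-newline "line splices" from src, writing the
 * unbroken text to dst (which must be at least len+1 bytes).
 * Returns the length of the unbroken text.  Hypothetical helper
 * for illustration only. */
static size_t unbreak_lines(char *dst, const char *src, size_t len)
{
        size_t i = 0, o = 0;

        while (i < len) {
                if (src[i] == '\\' && i + 1 < len && src[i + 1] == '\n') {
                        i += 2;  /* gobble the splice */
                        continue;
                }
                dst[o++] = src[i++];
        }
        dst[o] = '\0';
        return o;
}
```

For the example above, `"int in\\\ndex"` unbreaks to `"int index"`, so txt points into the spliced-free copy while orig still points into the original buffer.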
Here's what the tokenizer's text anchoring needs:
    Broken/unbroken text pointer -> line/col
    Unbroken contents per token, to identify identifier text
    Original contents per token, to rebuild the document
    The ability to change "original contents" so the document will be saved with modifications
    The ability to insert new tokens
Solution:
    New tokens will typically have identical txt and orig, even the same pointer.
    txt/txt_size holds the unbroken contents; orig/orig_size holds the original contents.
    Modify orig to change the document.
    txt identifies identifier text.
    Line lookup tables are used to resolve txt/orig pointers; other pointers can't be resolved the same way and may require traversing backward through the list.
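Put together, the fields above suggest a token shape along these lines. This is a sketch only: txt/txt_size and orig/orig_size come from the notes, but the struct name and everything else are assumptions, not the real ccan_tokenizer definition:

```c
#include <stddef.h>

/* Sketch of the anchoring scheme described above:
 *  - txt/txt_size:   unbroken (splice-free) contents; identifies
 *                    identifier text
 *  - orig/orig_size: original contents; modifying orig changes what
 *                    the saved document contains
 * No per-token line,col: line/col are derived on demand by resolving
 * txt or orig against the line lookup tables. */
struct sketch_token {
        int type;
        const char *txt;   size_t txt_size;   /* unbroken contents  */
        const char *orig;  size_t orig_size;  /* original contents  */
};
```

A freshly inserted token would simply set txt and orig to the same pointer, matching the "new tokens will typically have identical txt and orig" rule.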
What this means:
    Token txt/txt_size, orig/orig_size, orig_lines, txt_lines, and tok_point_lookup are all still correct.
    Token line,col will be removed.
Other improvements to do:
    Sanity-check the point lookups like crazy
    Remove the array() structures in token_list, as these are supposed to be read-only
    Make sure tok_point_lookup returns correct values for every possible pointer, particularly those in orig that fall on backslash breaks
    Convert the tok_message_queue into an array of messages bound to tokens
Ask Rusty about the trailing newline in this case:
    /* Blah
     *
     * blah
     */
    Here, rather than the trailing space being truly blank, it is "blank" only from the comment's perspective. May require deeper analysis.
Todos from ccan_tokenizer.h:
    /*
    Assumption: Every token fits in one and exactly one line
        Counterexamples:
            Backslash-broken lines
            Multiline comments
    Checks to implement in the tokenizer:
        Is the $ character used in an identifier? (some configurations of GCC allow this)
        Are there potentially ambiguous sequences in a string literal? (e.g. "\0000")
        Are there stray characters? (e.g. '\0', '@', '\b')
        Are there trailing spaces at the end of lines (unless said spaces consume the entire line)?
        Are there trailing spaces after a backslash-broken line?
    Fixes todo:
        A backslash-newline sequence should register as an empty character, and the tokenizer's line value should be incremented accordingly.
    */
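The trailing-space check above is easy to prototype. A minimal sketch (hypothetical helper, not part of ccan_tokenizer), honoring the "spaces consume the entire line" exemption:

```c
#include <stddef.h>

/* Does one line (delimited by line/len, without its newline) end in
 * trailing spaces or tabs?  A line consisting entirely of whitespace
 * is exempt, per the check's wording above. */
static int has_trailing_space(const char *line, size_t len)
{
        size_t end = len;

        while (end > 0 && (line[end-1] == ' ' || line[end-1] == '\t'))
                end--;
        if (end == 0)
                return 0;   /* all-whitespace line: exempt */
        return end < len;   /* some content, then trailing blanks */
}
```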
Lex angle-bracket strings in #include
Check the rules in the documentation
Examine the message queue as part of testing the tokenizer:
    Make sure there are no bug messages
    Make sure files compile with no warnings
For the tokenizer sanity check, make sure integers and floats have valid suffixes, respectively (e.g. no TOK_F for an integer, no TOK_ULL for a float)
Update the scan_number sanity checks
(done) Move scan_number et al. to a separate C file
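The integer-suffix half of that sanity check can be sketched on its own. The helper below (name and return convention are assumptions, not the real scan_number code) accepts the standard C integer suffixes, so e.g. the `f` in `0755f` is rejected:

```c
/* Accept the C integer suffixes: an optional u/U combined with
 * l/L or ll/LL in either order.  Mixed-case "lL"/"Ll" is invalid.
 * Returns 1 for a valid (possibly empty) suffix, 0 otherwise. */
static int valid_int_suffix(const char *s)
{
        int seen_u = 0, seen_l = 0;

        while (*s) {
                if ((*s == 'u' || *s == 'U') && !seen_u) {
                        seen_u = 1;
                        s++;
                } else if ((*s == 'l' || *s == 'L') && !seen_l) {
                        seen_l = 1;
                        if (s[1] == s[0])  /* "ll"/"LL", not "lL" */
                                s++;
                        s++;
                } else {
                        return 0;  /* e.g. the 'f' in "0755f" */
                }
        }
        return 1;
}
```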
Test:
    Overflow and underflow floats
    0x.p0
    (done) 0755f  // octal 0755 with an invalid suffix
    (done) 0755e1 // floating-point 7550
Figure out how keywords will be handled.
    Preprocessor directives are case-sensitive, not case-insensitive as first assumed (except __VA_ARGS__)
    All C keywords are case-sensitive
    __VA_ARGS__ should be read as an identifier unless it's in the expansion of a macro; used elsewhere, it makes GCC generate a warning.
    We are in the expansion of a macro after <startline> <space> # <space>
    Don't forget about __attribute__
    Except for __VA_ARGS__, all preprocessor keywords are preceded by <startline> <space> # <space>
    Solution:
        All the words themselves will go into one opkw dictionary, and for both type and opkw, no distinction will be made between preprocessor and normal keywords.
        Instead, "int type" will become "short type; unsigned short cpp:1;"
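The proposed layout can be written out as a struct. The `short type; unsigned short cpp:1;` fields follow the note above verbatim; the struct name and the opkw field's placement are assumptions for illustration:

```c
/* Sketch of the shared-dictionary plan: one opkw value per word,
 * plus a cpp bit recording whether the token appeared in a
 * preprocessor context, instead of separate keyword namespaces. */
struct sketch_tok_hdr {
        short type;            /* narrowed from int, per the note  */
        unsigned short cpp:1;  /* 1 if in a preprocessor directive */
        int opkw;              /* index into the shared dictionary */
};
```

The cpp bit is what lets, say, `if` the C keyword and `#if` the directive share one dictionary entry while remaining distinguishable.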
Merge
Commit ccan_tokenizer to the ccan repo
Introduce ccan_tokenizer to ccanlint
Write testcases for scanning all available operators
Support integer and floating-point suffixes (e.g. 500UL, 0.5f)
Examine the message queue after tokenizing
Make sure single-character operators have an opkw < 128
Make sure c_dictionary has no duplicate entries
Write verifiers for types other than TOK_WHITE
What's been done:
    Organized the operator table
    Merged Rusty's changes
    Fixed if -> while in finalize
    Fixed a couple of mistakes in the run-simple-token.c testcases themselves:
        the expected orig/orig_size sizes weren't right
    Made token_list_sanity_check a public function and used it throughout run-simple-token.c
    Tests succeed and pass valgrind
    Lines/columns of every token are recorded
    (done) Fix "0\nstatic"
    (done) Write tests to make sure backslash-broken lines have correct token locations
    (done) Correctly handle backslash-broken lines
One plan: separate the scanning code from the reading code. Scanning sends valid ranges to reading, and reading fills in valid tokens for the tokenizer/scanner to properly add.
Another plan: un-break backslash-broken lines into another copy of the input. Create an array of the positions of each real line break so
Annotate message queue messages with the current token
Conversion to make:
    From:
        A position in the unbroken text
    To:
        The real line number
        The real offset from the start of the line
    Thus, we want an array of real line start locations with respect to the unbroken text:
        Here is a bro\
        ken line. Here is a
        real line.
    becomes:
        <LINE>Here is a bro<LINE>ken line. Here is a
        <LINE>real line.
    If we know the position of the token text with respect to the unbroken text, we can look up the real line number and offset using only the array of real line start positions within the unbroken text.
    Because all we need is the orig and orig_size with respect to the unbroken text to orient
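The lookup just described is a binary search over the array of real line starts. A sketch under those assumptions (function name and signature are invented; in the notes this role belongs to tok_point_lookup and the line tables):

```c
#include <stddef.h>

/* Given the sorted array of real line start offsets (w.r.t. the
 * unbroken text, starts[0] == 0) and a position in that text,
 * binary-search for the line containing it.  Returns the 0-based
 * real line number; *col receives the offset from the line start. */
static size_t line_lookup(const size_t *starts, size_t nlines,
                          size_t pos, size_t *col)
{
        size_t lo = 0, hi = nlines;  /* invariant: starts[lo] <= pos */

        while (hi - lo > 1) {
                size_t mid = lo + (hi - lo) / 2;
                if (starts[mid] <= pos)
                        lo = mid;
                else
                        hi = mid;
        }
        *col = pos - starts[lo];
        return lo;
}
```

For the "Here is a bro\ken line." example, the unbroken text's real line starts are {0, 33}, so a position anywhere in the first 33 characters resolves to real line 0 even though it spans a backslash break.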