This skips a lot of validation, settling for "good enough" (e.g.
matching comment opens with closes).
This has a chance of incorrectly matching in languages with `@` as an
operator, like Python, but Python encourages spaces around operators,
so hopefully this won't be a problem.
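As a rough illustration of that trade-off, here is a minimal sketch of
this kind of heuristic; `looks_like_email` and its character rules are
hypothetical, not the actual parser code:

```rust
// Hypothetical sketch: treat a token as an email when an `@` has
// plausible identifier text on both sides. No open/close validation.
fn looks_like_email(token: &str) -> bool {
    fn is_email_char(c: char) -> bool {
        c.is_ascii_alphanumeric() || matches!(c, '.' | '-' | '_')
    }
    match token.split_once('@') {
        Some((local, domain)) => {
            !local.is_empty()
                && !domain.is_empty()
                && local.chars().all(is_email_char)
                && domain.chars().all(is_email_char)
        }
        None => false,
    }
}

fn main() {
    assert!(looks_like_email("user@example.com"));
    // Python's matrix-multiply operator only collides when written
    // without spaces; idiomatic `a @ b` never arrives as one token.
    assert!(looks_like_email("a@b"));
    assert!(!looks_like_email("@handle"));
}
```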
For now, we hardcode a minimum length of 90 bytes to avoid ambiguity
with math operations on variables (people generally use whitespace
anyway).
Fixes #287
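A hedged sketch of how such a length gate can sit in front of
detection; the 90-byte constant comes from the text above, while the
name and the simplified base64-like alphabet check are assumptions:

```rust
// Sketch only: short math expressions share an alphabet with encoded
// data, so a length gate filters them before any deeper detection.
const MIN_LEN: usize = 90;

fn looks_like_encoded_data(token: &str) -> bool {
    token.len() >= MIN_LEN
        && token
            .bytes()
            .all(|b| b.is_ascii_alphanumeric() || matches!(b, b'+' | b'/' | b'='))
}

fn main() {
    // `x+y/z` fails the gate, so it is still treated as ordinary code.
    assert!(!looks_like_encoded_data("x+y/z"));
    assert!(looks_like_encoded_data(&"Zm9v".repeat(23))); // 92 bytes
}
```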
We might be able to make this bail out earlier, and not accidentally
detect the wrong thing, by checking whether the hex values are
lowercase. RFC 4122 says that UUIDs must be generated lowercase, while
input accepts any case. The main issues are the risk on the "input"
side and the extra annoyance of writing a custom `is_hex_digit`
function.
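For illustration, a sketch of what that custom check could look like
(`is_lower_hex_digit` is a hypothetical name):

```rust
// Unlike `char::is_ascii_hexdigit`, this accepts only the lowercase
// digits RFC 4122 requires generators to emit.
fn is_lower_hex_digit(c: char) -> bool {
    matches!(c, '0'..='9' | 'a'..='f')
}

fn main() {
    assert!("6ba7b810".chars().all(is_lower_hex_digit));
    // Uppercase is still valid on input, which is exactly the risk:
    // bailing on `6BA7B810` would skip a UUID a user legitimately typed.
    assert!(!"6BA7B810".chars().all(is_lower_hex_digit));
}
```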
This is prep for other items to be ignored.
BREAKING CHANGE: `TokenizerBuilder` no longer takes config for ignoring
tokens. Relatedly, we now ignore token-ignore config flags.
The main goal is to support replacing the parser with `nom`, where I
need access to `str`-only functionality.
With crates like `simdutf8`, this might also offer performance gains,
since such crates see the biggest benefit when validating large blocks
at once.
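A sketch of the shape this enables: validate the whole buffer up front,
then stay in `&str` space. Only `simdutf8::basic::from_utf8` is a real
API here; the tokenizing step is a stand-in for str-only parsing (e.g.
`nom` combinators over `&str`):

```rust
// One large validation pass, then str-only processing.
fn check_file(bytes: &[u8]) -> Option<usize> {
    let text: &str = simdutf8::basic::from_utf8(bytes).ok()?;
    // From here on, everything operates on `str`, never raw bytes.
    Some(text.split_whitespace().count())
}

fn main() {
    assert_eq!(check_file(b"hello there world"), Some(3));
    assert_eq!(check_file(b"\xFF\xFE"), None); // not UTF-8: skip or fall back
}
```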
- We weren't consistent in quoting words.
- We used byte offsets rather than column counts (see the sketch
  below).
- We mixed styles between disallowed and corrections.
Fixes #165
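A sketch of the byte-offset-to-column point from the list above;
`byte_offset_to_column` is a hypothetical helper counting characters
rather than bytes:

```rust
// Report character columns instead of raw byte offsets.
fn byte_offset_to_column(line: &str, byte_offset: usize) -> usize {
    line[..byte_offset].chars().count()
}

fn main() {
    let line = "héllo wrld";
    let byte_offset = line.find("wrld").unwrap();
    assert_eq!(byte_offset, 7); // `é` is two bytes in UTF-8...
    assert_eq!(byte_offset_to_column(line, byte_offset), 6); // ...one column
}
```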
This switches us from a homegrown implementation to `content_inspector`.
- Adds some optimizations by looking for the BOM.
- We used the same algorithm for finding null bytes.
- `content_inspector` caps how much of the buffer is searched, though.
Besides performance, `content_inspector` also has some known-binary
magic numbers to avoid bad detections.
Fixes #34
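A short sketch of the behavior described, with examples adapted from
`content_inspector`'s documentation:

```rust
use content_inspector::{inspect, ContentType};

fn main() {
    // The BOM check happens up front, before any deeper scanning.
    assert_eq!(inspect(b"\xEF\xBB\xBFhello"), ContentType::UTF_8_BOM);
    assert_eq!(inspect(b"hello"), ContentType::UTF_8);
    assert!(inspect(b"hello").is_text());
    // Known-binary magic numbers (here, a JPEG header) classify as binary.
    assert_eq!(
        inspect(b"\xFF\xE0\x00\x10\x4A\x46\x49\x46\x00"),
        ContentType::BINARY
    );
}
```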
Some of the other spell checkers already do this. While I've not checked
where we might need it for our dictionary, this will be important for
dialects.