If escape sequences immediately follow one another, there is no
delimiter in between them.
Previously, in that case, parsing stopped and missed any typos further
in the file.
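As a hypothetical illustration (the real trigger came from actual
source files):
```
// Two escapes back-to-back, with no delimiter in between; parsing
// previously stopped here, missing `millenium` later in the line.
let content = r"\r\nstarting millenium";
```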
Hard to say how to handle `doen't` since we don't handle contractions.
For now, I've gone ahead and added corrections for the relevant part of
the contraction. Hopefully that doesn't confuse people.
Part of #362
By experimentation (see ticket), it seems that same-case hexadecimal
strings of 32 characters or longer are almost never intended to hold
text. By treating such strings as ignored, we can avoid a larger
category of false positives.
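A minimal sketch of the heuristic (hypothetical helper; the real check
lives in the tokenizer):
```
/// Sketch: treat a same-case hexadecimal run of 32+ characters as data,
/// not text. Mixed-case runs stay eligible for checking.
fn is_ignored_hex(token: &str) -> bool {
    const MIN_LEN: usize = 32;
    let lower = |b: u8| b.is_ascii_digit() || (b'a'..=b'f').contains(&b);
    let upper = |b: u8| b.is_ascii_digit() || (b'A'..=b'F').contains(&b);
    token.len() >= MIN_LEN
        && (token.bytes().all(lower) || token.bytes().all(upper))
}
```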
Closes #326.
Since our goal is 100% confidence in the results, it's better to not
check words than to correct the wrong words.
With that in mind, we'll ignore words after what might be c-escape
sequences (`\nfoo`) or printf substitutions (`%dfoo`).
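In sketch form, the idea is that a word gets skipped when the byte
immediately before it is `\` or `%` (hypothetical helper, not the real
parser hook):
```
/// Sketch: given the full text and a word's start offset, report whether
/// the word trails a possible escape (`\nfoo`) or substitution (`%dfoo`).
fn follows_escape_or_substitution(text: &str, word_start: usize) -> bool {
    word_start > 0 && matches!(text.as_bytes()[word_start - 1], b'\\' | b'%')
}
```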
Fixes #3
This cuts varcon lookup times in half, but I still suspect it's slower
than phf. Like with bsearch, and unlike phf, the cost is consistent
between hits and misses.
At least this doesn't have the compile-time hit of PHF + unicase. Maybe
I should experiment with integrating a non-const-fn variant of unicase
with PHF and give up on all of this extra complexity.
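For comparison, a bsearch-style probe over a pre-sorted table looks
roughly like this (illustrative table; the real varcon data is generated
and case-insensitive):
```
/// Sketch: probe a sorted `(variant, preferred)` table. The cost is
/// O(log n) whether the word is present or not.
static TABLE: &[(&str, &str)] = &[("colour", "color"), ("grey", "gray")];

fn lookup(word: &str) -> Option<&'static str> {
    TABLE
        .binary_search_by(|&(key, _)| key.cmp(word))
        .ok()
        .map(|i| TABLE[i].1)
}
```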
Before, we only guaranteed that some dicts were pre-sorted. Now, all are
for-sure pre-sorted.
This also gives each dict the size check to avoid unnecessary lookups.
But this is really about refactoring in prep for playing with other
lookup options, like tries.
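The size check mentioned above, in sketch form (hypothetical types; the
real dicts are code-generated):
```
use std::collections::HashMap;

/// Sketch: a dict plus the length range of its keys.
struct Dict {
    min_len: usize,
    max_len: usize,
    table: HashMap<&'static str, &'static str>,
}

impl Dict {
    /// A length check is O(1); hashing or searching the word is not, so
    /// bail before the lookup when the length can't possibly match.
    fn correct(&self, word: &str) -> Option<&'static str> {
        if word.len() < self.min_len || word.len() > self.max_len {
            return None;
        }
        self.table.get(word).copied()
    }
}
```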
This skips a lot of validation in the name of being "good enough"
(matching comment opens/closes, etc.).
This has a chance of incorrectly matching in languages with `@` as an
operator, like Python, but Python encourages spaces around operators,
so hopefully this won't be a problem.
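A loose sketch of the shape being matched (hypothetical helper; the
real check lives in the tokenizer):
```
/// Sketch: `user@example.com`-shaped tokens get ignored. A spaceless
/// Python expression like `a@b` could match too, but PEP 8 spacing
/// around operators makes that rare in practice.
fn looks_like_email(token: &str) -> bool {
    match token.split_once('@') {
        Some((local, domain)) => !local.is_empty() && domain.contains('.'),
        None => false,
    }
}
```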
For now, we hardcoded a minimum length of 90 bytes to avoid ambiguity
with math operations on variables (people generally use whitespace
anyway).
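Roughly, the gate looks like this sketch (names hypothetical):
```
/// Sketch: only runs of the base64 alphabet at least 90 bytes long get
/// ignored; short math like `a+b/c` never reaches the threshold.
fn is_ignored_base64(token: &str) -> bool {
    const MIN_LEN: usize = 90;
    token.len() >= MIN_LEN
        && token
            .bytes()
            .all(|b| b.is_ascii_alphanumeric() || matches!(b, b'+' | b'/' | b'='))
}
```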
Fixes #287
We might be able to make this bail out earlier and avoid accidentally
detecting the wrong thing by checking whether the hex values are
lowercase. RFC 4122 says that UUIDs must be generated lowercase, while
input accepts any case. The main issues are the risk on the "input"
part and the extra annoyance of writing a custom `is_hex_digit`
function.
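The custom helper would be tiny, along these lines (hypothetical name):
```
/// Sketch: RFC 4122 generators emit lowercase, so a stricter UUID match
/// could require lowercase hex digits only.
fn is_lower_hex_digit(b: u8) -> bool {
    b.is_ascii_digit() || (b'a'..=b'f').contains(&b)
}
```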
This is prep for other items to be ignored.
BREAKING CHANGE: `TokenizerBuilder` no longer takes config for ignoring
tokens. Relatedly, we now ignore token-ignore config flags.
This dropped RSS (memory usage) from 4GB to 1.5GB when compiling.
The extra `match` could impact performance, but I'm not too concerned
since the default is to not look within vars.
This is mostly to give implementation flexibility for changing out how
we store the data to reduce compilation memory usage.
This does have a performance impact, jumping from ~220ns to ~320ns for
a dict lookup, according to our micro-benchmarks.
Variant support slows us down by 10-50%. I assume most people will run
with `en`, so most of this overhead is wasted. So instead of merging
vars with dict, let's get a quick win by just skipping vars when we
don't need them. If the assumptions behind this change over time, or if
there is a need to speed up a specific locale, we can re-address this.
Before:
```
check_file/Typos/code time: [35.860 us 36.021 us 36.187 us]
thrpt: [8.0117 MiB/s 8.0486 MiB/s 8.0846 MiB/s]
check_file/Typos/corpus time: [26.966 ms 27.215 ms 27.521 ms]
thrpt: [21.127 MiB/s 21.365 MiB/s 21.562 MiB/s]
```
After:
```
check_file/Typos/code time: [33.837 us 33.928 us 34.031 us]
thrpt: [8.5191 MiB/s 8.5452 MiB/s 8.5680 MiB/s]
check_file/Typos/corpus time: [17.521 ms 17.620 ms 17.730 ms]
thrpt: [32.794 MiB/s 32.999 MiB/s 33.184 MiB/s]
```
This puts us in line with `--no-default-features --features dict`.
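In sketch form, the skip amounts to something like this (hypothetical
`Locale`, `vars`, and `dict`; the real types differ):
```
use std::collections::HashMap;

#[derive(PartialEq)]
enum Locale {
    En,
    EnGb,
}

/// Sketch: only pay for the variants (varcon) lookup when a non-default
/// locale asks for it; plain `en` goes straight to the main dict.
fn correct(
    word: &str,
    locale: Locale,
    vars: &HashMap<&'static str, &'static str>,
    dict: &HashMap<&'static str, &'static str>,
) -> Option<&'static str> {
    if locale != Locale::En {
        if let Some(fix) = vars.get(word).copied() {
            return Some(fix);
        }
    }
    dict.get(word).copied()
}
```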
Fixes #253
These were found while running `typos` on Linux and inspecting a
sampling of the results. #249 represents additional changes to make.
There were some identifiers that looked like hardware registers; I'm
unsure what can be done for them.
The main goal is to support replacing the parser with `nom`, where I
need access to `str`-only functionality.
With crates like `simdutf8`, this might also offer performance gains,
since they see the biggest benefit when validating large blocks.
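With `simdutf8`, for example, a sketch of block validation might look
like this (assuming its `basic::from_utf8` entry point):
```
/// Sketch: validate a whole buffer up front; SIMD-accelerated
/// validation pays off most on large contiguous blocks.
fn as_text(buf: &[u8]) -> Option<&str> {
    simdutf8::basic::from_utf8(buf).ok()
}
```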
- We weren't consistent in quoting words
- We used byte offsets rather than column counts
- We mixed styles between disallowed and corrections
Fixes #165
Bypass hashing when we know (through str::len) that a word won't be in
the dict.
Master:
```
real 0m26.675s
user 0m33.683s
sys 0m4.535s
```
With this change:
```
real 0m24.432s
user 0m32.492s
sys 0m4.190s
```
Bypass hashing when we know (through str::len) that a word won't be in
the dict.
Master:
```
real 0m26.675s
user 0m33.683s
sys 0m4.535s
```
With this change:
```
real 0m24.060s
user 0m31.559s
sys 0m4.258s
```
This switches us from a homegrown implementation to `content_inspector`
- Adds some optimizations by looking for the BOM.
- We used the same algorithm for finding null bytes
- `content_inspector` caps how much of the buffer is searched, though
Besides performance, `content_inspector` also has some known-binary
magic numbers to avoid bad detections.
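For reference, a hand-rolled sketch of the two signals (not
`content_inspector`'s actual code; the scan cap value is illustrative):
```
/// Sketch: a leading BOM strongly suggests text; an early NUL byte
/// strongly suggests binary. Only a capped prefix gets scanned.
fn looks_like_text(buf: &[u8]) -> bool {
    const SCAN_CAP: usize = 1024; // illustrative cap
    let has_bom = buf.starts_with(&[0xEF, 0xBB, 0xBF]) // UTF-8
        || buf.starts_with(&[0xFF, 0xFE]) // UTF-16 LE
        || buf.starts_with(&[0xFE, 0xFF]); // UTF-16 BE
    has_bom || !buf[..buf.len().min(SCAN_CAP)].contains(&0)
}
```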
Fixes #34
The goal is to be as accepting of and unobtrusive to new code bases as
possible. To this end, we correct typos into the closest English
dialect.
If someone wants to opt in, they can have `typos` correct to a specific
English dialect.
Fixes #52
Fixes #22
Some of the other spell checkers already do this. While I've not checked
where we might need it for our dictionary, this will be important for
dialects.