Replies: 3 comments 6 replies
-
The question is, for what size of 'large file' should we optimize? For me, 500 lines is a really large file and I rarely edit files larger than that. But recently, someone commented that editing a 4000-line file was slow. As for the example, some obvious improvement is of course possible, but these kinds of things have to be done on a case-by-case basis. Interesting suggestion, it would result in fewer tokens, but where do you think this would be a performance improvement? Not for

Actually, the biggest performance problem we have is reading indexes, in particular stub indexes. Reading all indexed commands takes something like 30 seconds if texlive-full is indexed, and other project-specific indexes can also easily take a second or so, I suspect because they have to be read from disk. To work around it I have created a whole lot of caches littered everywhere, but these are difficult to keep fresh and create the problem of when to fill them (in the background). I have no idea what would be a proper solution for this.
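A minimal sketch of one possible shape for such a cache, assuming the expensive index read can be paired with a cheap modification counter (in the IntelliJ Platform this would typically be something like a `PsiModificationTracker` value). All names here (`ModTrackedCache`, the simulated index read) are illustrative, not actual TeXiFy or platform API:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Hypothetical sketch: cache an expensive index lookup and refresh it only
// when a modification counter has changed, instead of re-reading the stub
// index from disk on every access.
final class ModTrackedCache<T> {
    private final Supplier<T> compute;       // the expensive index read
    private final Supplier<Long> modCount;   // e.g. a PSI modification tracker
    private volatile T cached;
    private volatile long cachedStamp = -1;

    ModTrackedCache(Supplier<T> compute, Supplier<Long> modCount) {
        this.compute = compute;
        this.modCount = modCount;
    }

    T get() {
        long stamp = modCount.get();
        if (cached == null || stamp != cachedStamp) {
            cached = compute.get();          // refresh only when stale
            cachedStamp = stamp;
        }
        return cached;
    }
}

public class CacheDemo {
    static int reads = 0;                    // counts simulated disk reads

    public static void main(String[] args) {
        AtomicLong mod = new AtomicLong(0);
        ModTrackedCache<List<String>> commands = new ModTrackedCache<>(
                () -> { reads++; return List.of("\\section", "\\alpha"); },
                mod::get);

        commands.get();
        commands.get();                      // second call served from cache
        System.out.println(reads);           // 1

        mod.incrementAndGet();               // a document edit invalidates
        commands.get();
        System.out.println(reads);           // 2
    }
}
```

This does not solve the "when to fill in the background" problem, but tying invalidation to a single modification stamp at least centralizes the freshness logic instead of littering ad-hoc caches everywhere.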
-
Here is a lengthy TeX document for testing, which is full of text and math commands. (The suffix is renamed, as GitHub does not allow uploading .tex documents.)
-
After #4087, I think that the main load on the UI thread is the parser, which can hardly be improved unless we try to improve the lexer and the PSI structure. Regarding plain text, as far as I tested, we can indeed benefit a lot from merging words, especially when the document is mostly plain text. Consider PlainTexts.txt. Simply removing
-
The editor appears to be very laggy for relatively large tex files and I have been profiling the performance of the plugin to figure out why.
It seems that the current performance of the plugin can be greatly improved by refactoring the PSI-related code.
Optimizing PSI operations
On one hand, I found that some of the current code operating on the PSI can be improved, even with minor changes.
Take the following fragment in `LatexSectionFoldingBuilder.buildFoldRegions` for example, which took about 10% of the CPU time in my profiling run.
I think it has the following problems:
- `root.childrenOfType<LatexCommands>` traverses the whole PSI tree and is very time-consuming, yet it is called three times. The latter two calls return the same list and can be merged.
- The `List.filter` operations can be merged, avoiding intermediate list creation.

As a result, I believe `LatexSectionFoldingBuilder.buildFoldRegions` would improve a lot if we optimize/rewrite the related code.
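To illustrate the idea (this is a toy stand-in, not the actual TeXiFy code or the IntelliJ PSI API): instead of walking the whole tree three times and filtering each result list separately, walk it once and apply the combined predicate during the single pass:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch: one traversal with a merged filter, rather than
// three whole-tree traversals plus separate List.filter passes.
public class SingleTraversal {
    // A minimal stand-in for a PSI node.
    record Node(String name, List<Node> children) {}

    static int visits = 0;                   // counts node visits

    // One recursive walk; the predicate decides what to collect,
    // so no intermediate lists are created.
    static void collect(Node root, Predicate<Node> keep, List<Node> out) {
        visits++;
        if (keep.test(root)) out.add(root);
        for (Node c : root.children()) collect(c, keep, out);
    }

    public static void main(String[] args) {
        Node tree = new Node("root", List.of(
                new Node("\\section", List.of()),
                new Node("\\subsection", List.of()),
                new Node("text", List.of())));

        // With three separate traversals, these four nodes would be
        // visited twelve times; the merged version visits each once.
        List<Node> commands = new ArrayList<>();
        collect(tree, n -> n.name().startsWith("\\"), commands);

        System.out.println(visits);          // 4: each node visited once
        System.out.println(commands.size()); // 2
    }
}
```

In Kotlin, the same effect can often be had by collapsing chained `filter` calls into a single predicate, or by using `asSequence()` so intermediate lists are never materialized.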
Improving PSI structure
I have inspected the current construction of the PSI tree and found that plain words are represented by separate PSI elements.
I think this can lead to extensive but unnecessary PSI elements.
Take the following as an example:
We will get one PSI element for each digit:

However, as they are merely plain text, I think we should assign only one element to the whole run, as is the case for the PSI of a Java string literal.
I think merging these PSI elements would greatly reduce the size of the PSI tree and improve performance.
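A hedged sketch of the merging idea: if the lexer emits one token per word (or per digit), a coalescing pass can fold adjacent plain-text tokens into a single element before the tree is built. The token type names here are illustrative, not TeXiFy's actual token set:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: coalesce runs of adjacent plain-text tokens into one token,
// so the resulting PSI tree gets a single element per text run instead
// of one element per word or digit.
public class MergeWords {
    record Token(String type, String text) {}

    static List<Token> merge(List<Token> in) {
        List<Token> out = new ArrayList<>();
        StringBuilder run = new StringBuilder();
        for (Token t : in) {
            if (t.type().equals("TEXT")) {
                run.append(t.text());        // extend the current text run
            } else {
                if (run.length() > 0) {      // flush the run before a command
                    out.add(new Token("TEXT", run.toString()));
                    run.setLength(0);
                }
                out.add(t);                  // commands etc. pass through
            }
        }
        if (run.length() > 0) out.add(new Token("TEXT", run.toString()));
        return out;
    }

    public static void main(String[] args) {
        List<Token> tokens = List.of(
                new Token("TEXT", "1"), new Token("TEXT", "2"), new Token("TEXT", "3"),
                new Token("COMMAND", "\\alpha"),
                new Token("TEXT", "a"), new Token("TEXT", "b"));

        List<Token> merged = merge(tokens);
        System.out.println(merged.size());        // 3 instead of 6
        System.out.println(merged.get(0).text()); // 123
    }
}
```

The trade-off is granularity: anything that needs word-level PSI (inspections, references, spell checking) would have to work on offsets within the merged element instead of on separate elements.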