The token type analyzer calls TokenClassifierBase.ClassifyIdentifier for each identifier.
```csharp
private TokenInfo ClassifyIdentifier(SyntaxToken token)
{
    if (semanticModel.GetDeclaredSymbol(token.Parent) is { } declaration)
    {
        return ClassifyIdentifier(token, declaration);
    }
    else if (GetBindableParent(token) is { } parent && semanticModel.GetSymbolInfo(parent).Symbol is { } symbol)
    {
        return ClassifyIdentifier(token, symbol);
    }
    else
    {
        return null;
    }
}
```

This method calls `semanticModel.GetDeclaredSymbol(token.Parent)` and, if that finds no declaration, `semanticModel.GetSymbolInfo(parent)` on the bindable parent of the identifier token. Either way an `ISymbol` is created and a mapping from `SyntaxNode` to `ISymbol` is added to the semantic model. This puts a lot of pressure on any shared semantic model, because the `ISymbol` and the mapping must be cached by the semantic model in a thread-safe manner. The snippet below shows how many identifiers are present in a simple piece of code.
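Conceptually, the thread-safe caching a shared semantic model has to perform looks something like the sketch below. This is illustrative only, not Roslyn's actual internals; `SymbolCacheSketch` and `GetOrAdd` are made-up names:

```csharp
using System;
using System.Collections.Concurrent;
using Microsoft.CodeAnalysis;

// Illustrative sketch: a semantic model shared across threads must keep
// a thread-safe node-to-symbol cache, conceptually similar to this.
internal sealed class SymbolCacheSketch
{
    private readonly ConcurrentDictionary<SyntaxNode, ISymbol> cache = new();

    // Every cache miss binds the node and allocates an ISymbol,
    // which is the pressure described above.
    public ISymbol GetOrAdd(SyntaxNode node, Func<SyntaxNode, ISymbol> bind) =>
        cache.GetOrAdd(node, bind);
}
```

Every identifier classified through the semantic model pays this cost, which is why the identifier count below matters.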
```csharp
using System;                         // +1
using System.Collections.Generic;     // +3
namespace A.B.C;                      // +3

public class D                        // +1
{
    public D()                        // +1
    {
    }

    public void M()                   // +1
    {
        List<D> myList;               // +3
    }
}
```
`TokenClassifierBase.ClassifyIdentifier` can only have two outcomes: it returns a `TokenInfo` for the token, or it returns `null`.
This classification can often be done purely on the syntactic level. In the sample above, all identifiers can be classified without querying the semantic model, saving 20 calls to the semantic model (one `GetDeclaredSymbol` call for each of the 13 identifiers, plus a `GetSymbolInfo` call for the 7 identifiers that are not declarations) and the allocation of 11 symbols.
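A syntax-only fast path could look roughly like the following sketch. The method name and the `TokenInfoFor` helper are hypothetical; only the Roslyn syntax types are real:

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Hypothetical sketch of a syntax-only fast path: handle the cases that
// are unambiguous from the syntax tree alone, and fall back to the
// semantic model (signalled by null here) for everything else.
private TokenInfo ClassifyIdentifierSyntactically(SyntaxToken token) =>
    token.Parent switch
    {
        // Declarations: the identifier names the declared entity itself,
        // so no symbol lookup is needed to classify it.
        ClassDeclarationSyntax c when c.Identifier == token => TokenInfoFor(token),
        MethodDeclarationSyntax m when m.Identifier == token => TokenInfoFor(token),
        ConstructorDeclarationSyntax x when x.Identifier == token => TokenInfoFor(token),
        VariableDeclaratorSyntax v when v.Identifier == token => TokenInfoFor(token),
        // Anything else still needs the semantic model.
        _ => null,
    };
```

The fallback keeps the behavior of the existing `ClassifyIdentifier` intact; the semantic model is only consulted for the genuinely ambiguous cases.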
To do a proper classification on the syntactic level, the test infrastructure needs to be extended and made more powerful. #7289 describes how to do this, and #7108 implements this infrastructure. This issue is therefore blocked by #7289.