Bases: Lexer, ABC
Bases: BlockLexer
Bases: BlockLexer, ABC
Bases: SectionLexer
Bases: SectionLexer
Bases: SectionLexer
Bases: SectionLexer
Bases: SettingSectionLexer
Bases: SectionLexer
Bases: SectionLexer
Bases: SectionLexer
Bases: BlockLexer, ABC
Bases: TestOrKeywordLexer
Bases: TestOrKeywordLexer
Bases: BlockLexer, ABC
Bases: NestedBlockLexer
Bases: NestedBlockLexer
Bases: NestedBlockLexer
Bases: NestedBlockLexer
Bases: NestedBlockLexer
Bases: NestedBlockLexer
Bases: object
Bases: LexingContext
Bases: FileContext
Bases: FileContext
Bases: FileContext
Bases: LexingContext
Bases: LexingContext
Parses the given source to tokens.
source – The source where to read the data. Can be a path to a source file as a string or as a pathlib.Path object, an already opened file object, or Unicode text containing the data directly. Source files must be UTF-8 encoded.
data_only – When False (default), returns all tokens. When set to True, omits separators, comments, continuation markers, and other non-data tokens.
tokenize_variables – When True, possible variables in keyword arguments and elsewhere are tokenized. See the tokenize_variables() method for details.
lang – Additional languages to be supported during parsing. Can be a string matching any of the supported language codes or names, an initialized Language subclass, a list containing such strings or instances, or a Languages instance.
Returns a generator that yields Token instances.
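As an illustrative sketch, get_tokens() can be used like this (assuming it is imported from robot.api; the file name is hypothetical):

from robot.api import get_tokens

# Iterate over data tokens only, skipping separators, comments and
# other non-data tokens.
for token in get_tokens('example.robot', data_only=True):
    print(token.type, repr(token.value), token.lineno)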
Parses the given source to resource file tokens.
Same as get_tokens() otherwise, but the source is considered to be a resource file. This affects, for example, what settings are valid.
Parses the given source to init file tokens.
Same as get_tokens() otherwise, but the source is considered to be a suite initialization file. This affects, for example, what settings are valid.
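A similar sketch for resource and suite initialization files (assuming both functions are importable from robot.api; the file names are hypothetical):

from robot.api import get_init_tokens, get_resource_tokens

# Resource and initialization files accept different settings than suite
# files, so they have dedicated entry points.
resource_tokens = list(get_resource_tokens('common.resource', data_only=True))
init_tokens = list(get_init_tokens('__init__.robot', data_only=True))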
Bases: object
Bases: ABC
Bases: Settings, ABC
Bases: FileSettings
Bases: FileSettings
Bases: FileSettings
Bases: Settings
Bases: Settings
Bases: ABC
Bases: Lexer, ABC
Bases: StatementLexer, ABC
Bases: StatementLexer, ABC
Bases: SingleType, ABC
Bases: SectionHeaderLexer
Bases: SectionHeaderLexer
Bases: SectionHeaderLexer
Bases: SectionHeaderLexer
Bases: SectionHeaderLexer
Bases: SectionHeaderLexer
Bases: SectionHeaderLexer
Bases: SingleType
Bases: CommentLexer
Bases: StatementLexer
Bases: StatementLexer
Bases: StatementLexer
Bases: TypeAndArguments
Bases: StatementLexer
Bases: StatementLexer
Bases: TypeAndArguments
Bases: StatementLexer
Bases: TypeAndArguments
Bases: TypeAndArguments
Bases: TypeAndArguments
Bases: StatementLexer
Bases: TypeAndArguments
Bases: StatementLexer
Bases: TypeAndArguments
Bases: TypeAndArguments
Bases: StatementLexer
Bases: TypeAndArguments
Bases: TypeAndArguments
Bases: TypeAndArguments
Bases: TypeAndArguments
Bases: object
Bases: object
Token representing a piece of Robot Framework data.
Each token has type, value, line number, column offset and end column offset in type, value, lineno, col_offset and end_col_offset attributes, respectively. Tokens representing an error also have their error message in the error attribute.
Token types are declared as class attributes such as SETTING_HEADER and EOL. Values of these constants have changed slightly in Robot Framework 4.0, and they may change again in the future. It is thus safer to use the constants, not their values, when types are needed. For example, use Token(Token.EOL) instead of Token('EOL') and token.type == Token.EOL instead of token.type == 'EOL'.
If value is not given and type is a special marker like IF or EOL, the value is set automatically.
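A minimal sketch of constructing and inspecting tokens (assuming Token is imported from robot.api):

from robot.api import Token

# Prefer the class constants over their string values.
eol = Token(Token.EOL)          # value is filled in automatically for special markers
arg = Token(Token.ARGUMENT, 'Hello, world!', lineno=3, col_offset=4)

if arg.type == Token.ARGUMENT:  # safer than comparing against the string 'ARGUMENT'
    print(arg.value)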
Tokenizes possible variables in token value.
Yields the token itself if the token does not allow variables (see Token.ALLOW_VARIABLES) or its value does not contain variables. Otherwise, yields variable tokens as well as tokens before, after, or between variables so that they have the same type as the original token.
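A small sketch of that behavior (the exact splitting shown in the comments is illustrative):

from robot.api import Token

token = Token(Token.ARGUMENT, 'Hello, ${name}!', lineno=1, col_offset=0)

# Variables are yielded as VARIABLE tokens; the surrounding text keeps
# the original ARGUMENT type.
for part in token.tokenize_variables():
    print(part.type, repr(part.value))
# Roughly: ARGUMENT 'Hello, ', VARIABLE '${name}', ARGUMENT '!'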
Bases: Token
Token representing end of a statement.
Bases: Token
Token representing END token used to signify block ending.
Virtual END tokens have an empty string as their value, whereas "real" END tokens have the value 'END'.