xonsh.parsers.lexer¶
Lexer for xonsh code.
Written using a hybrid of tokenize and PLY.
- class xonsh.parsers.lexer.Lexer(tolerant=False, pymode=True)[source]¶
Implements a lexer for the xonsh language.
- Attributes:
- fname : str
The filename being lexed.
- lasttoken
The last token seen.
- lineno : int
The last line number seen.
- tolerant : bool
Tokenize without extra checks (e.g. paren matching). When True, ERRORTOKEN contains the erroneous string instead of an error message.
- pymode : bool
Start the lexer in Python mode.
- property tokens¶
- property tolerant¶
- xonsh.parsers.lexer.get_tokens(s, tolerant, pymode=True, tokenize_ioredirects=True)[source]¶
Given a string containing xonsh code, generates a stream of relevant PLY tokens using handle_token.
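Since the lexer is written as a hybrid of tokenize and PLY, the raw material that get_tokens post-processes is the standard-library tokenize stream. A minimal, stdlib-only sketch of that underlying stream (the helper name get_raw_tokens is illustrative, not part of the xonsh API):

```python
import io
import tokenize

def get_raw_tokens(s):
    """Yield (type-name, string, start) tuples for a source string,
    mirroring the tokenize stream that the xonsh lexer builds on."""
    readline = io.StringIO(s).readline
    for tok in tokenize.generate_tokens(readline):
        yield tokenize.tok_name[tok.type], tok.string, tok.start

for name, string, start in get_raw_tokens("x = 1\n"):
    print(name, repr(string), start)
```

get_tokens then maps each of these raw tokens onto one or more PLY tokens via handle_token, applying Python-mode or subprocess-mode rules as the lexer state dictates.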
- xonsh.parsers.lexer.handle_error_linecont(state, token)[source]¶
Function for handling special line continuations as whitespace characters in subprocess mode.
- xonsh.parsers.lexer.handle_error_space(state, token)[source]¶
Function for handling special whitespace characters in subprocess mode.
- xonsh.parsers.lexer.handle_ignore(state, token)[source]¶
Function for handling tokens that should be ignored.
- xonsh.parsers.lexer.handle_token(state, token)[source]¶
General-purpose token handler. Makes use of token_map or special_map to yield one or more PLY tokens from the given input.
- Parameters:
- state
The current state of the lexer, including information about whether we are in Python mode or subprocess mode, which changes the lexer’s behavior. Also includes the stream of tokens yet to be considered.
- token
The token (from tokenize) currently under consideration.