How to tokenize Perl source code?

I have some reasonable (not obfuscated) Perl source files, and I need a tokenizer that will split them into tokens and return the token type of each, e.g. for the script

print "Hello, World!\n";

it would return something like this:

  • keyword 5 bytes
  • whitespace 1 byte
  • double-quoted-string 17 bytes
  • semicolon 1 byte
  • whitespace 1 byte

What is the best library (preferably written in Perl) for this? It has to be reasonably correct, i.e. it should be able to parse syntactic constructs like qq{{\}}}, but it doesn't have to know about special parsers like Lingua::Romana::Perligata. I know that parsing Perl is undecidable in general, and only perl itself can do it right, but I don't need absolute correctness: the tokenizer may fail, be incompatible, or assume some default in some very rare corner cases, but it should work correctly most of the time. It must be better than the syntax highlighting built into an average text editor.

FYI I tried the PerlLexer in Pygments, which works reasonably well for most constructs, except that it cannot find the second print keyword in this one:

print length(<<"END"); print "\n";



use PPI;

Yes, only perl can parse Perl; however, PPI is the 95%-correct solution.
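As a minimal sketch of what that looks like, PPI ships a PPI::Tokenizer module whose all_tokens method returns the token stream as blessed PPI::Token objects; printing each token's class and its content length gives roughly the report asked for in the question (class names and exact whitespace handling are up to PPI, so the output shown here is indicative, not guaranteed byte-for-byte):

```perl
use strict;
use warnings;
use PPI::Tokenizer;

# The example script from the question, as a string.
my $source = qq{print "Hello, World!\\n";\n};

my $tokenizer = PPI::Tokenizer->new( \$source )
    or die "Could not create tokenizer";

# all_tokens returns an arrayref of PPI::Token subclasses.
for my $token ( @{ $tokenizer->all_tokens } ) {
    printf "%-28s %d bytes\n", ref($token), length( $token->content );
}
```

For the hello-world script this prints something like PPI::Token::Word (5 bytes), PPI::Token::Whitespace (1 byte), PPI::Token::Quote::Double (17 bytes), PPI::Token::Structure (1 byte), and a trailing PPI::Token::Whitespace for the newline. It also handles the heredoc example that trips up Pygments: PPI emits a PPI::Token::HereDoc for <<"END" and still sees the second print as a PPI::Token::Word.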
