How do you use sed from Perl?

I know how to use sed together with grep, but within Perl the following fails. How can one get sed to work from inside a Perl program?

chomp (my @lineNumbers=`grep -n "textToFind" $fileToProcess | sed -n 's/^\([0-9]*\)[:].*/\1/p'`)


I'm surprised that nobody has mentioned the s2p utility, which translates sed "scripts" (you know, most of the time one-liners) into valid Perl. (And there's an a2p utility for awk, too.)

Suggestion: Use Perl regular expressions and replacements instead of grep or sed.

It's approximately the same syntax, but more powerful. It will also be more efficient in the end, since it avoids spawning an extra sed process.

Anything you need to do with grep or sed can be done natively in Perl, and more easily. For instance:

my @lineNumbers;
open my $fh, '<', $fileToProcess or die "Can't open $fileToProcess: $!";
while (<$fh>) {
    next unless /textToFind/;
    push @lineNumbers, $.;    # $. is the current input line number
}
close $fh;

Supposedly Larry Wall wrote Perl because he found something that was impossible to do with sed and awk. The other answers have this right, use Perl regular expressions instead. Your code will have fewer external dependencies, be understandable to more people (Perl's user base is much bigger than sed user base), and your code will be cross-platform with no extra work.

Edit: Paul Tomblin relates an excellent story in his comment on my answer. I'm putting it here to increase its prominence.

"Henry Spencer, who did some amazing things with Awk, claimed that after demoing some awk stuff to Larry Wall, Larry said he wouldn't have bothered with Perl if he'd known." – Paul Tomblin

Use the power, Luke:

$ echo -e "a\nb\na"|perl -lne'/a/&&print$.'

So when you want the same thing as that slow and overcomplicated grep-and-sed combination, you can do it far more simply and quickly in Perl itself:

my @lineNumbers;
open my $fh, '<', $fileToProcess or die "Can't open $fileToProcess: $!";
while (<$fh>) {
    /textToFind/ and push @lineNumbers, $.;
}
close $fh;

Or, with the same memory cost as the original solution (the whole file is read into a list):

my @lineNumbers = do {
    open my $fh, '<', $fileToProcess or die "Can't open $fileToProcess: $!";
    my $i;
    map { ( ++$i ) x /textToFind/ } <$fh>;
};

If you had a large sed expression, you could use s2p, to convert it into a perl program.

If you run s2p 's/^\([0-9]*\)[:].*/\1/p', this is what you would get:

#!/opt/perl/bin/perl -w
eval 'exec /opt/perl/bin/perl -S $0 ${1+"$@"}'
  if 0;
$0 =~ s/^.*?(\w+)[\.\w+]*$/$1/;

use strict;
use Symbol;
use vars qw{ $isEOF $Hold %wFiles @Q $CondReg
         $doAutoPrint $doOpenWrite $doPrint };
$doAutoPrint = 1;
$doOpenWrite = 1;
# prototypes
sub openARGV();
sub getsARGV(;\$);
sub eofARGV();
sub printQ();

# Run: the sed loop reading input and applying the script
sub Run(){
    my( $h, $icnt, $s, $n );
    # hack (not unbreakable :-/) to avoid // matching an empty string
    my $z = "\000"; $z =~ /$z/;
    # Initialize.
    openARGV();
    $Hold    = '';
    $CondReg = 0;
    $doPrint = $doAutoPrint;
    while( getsARGV() ){
        chomp();
        $CondReg = 0;   # cleared on t
# s/^\([0-9]*\)[:].*/\1/p
{ $s = s /^(\d*)[:].*/${1}/s;
  $CondReg ||= $s;
  print $_, "\n" if $s;
}
EOS:    if( $doPrint ){
            print $_, "\n";
        } else {
            $doPrint = $doAutoPrint;
        }
        printQ() if @Q;
    }

    exit( 0 );
}
Run();

# openARGV: open 1st input file
sub openARGV(){
    unshift( @ARGV, '-' ) unless @ARGV;
    my $file = shift( @ARGV );
    open( ARG, "<$file" )
    || die( "$0: can't open $file for reading ($!)\n" );
    $isEOF = 0;
}

# getsARGV: Read another input line into argument (default: $_).
#           Move on to next input file, and reset EOF flag $isEOF.
sub getsARGV(;\$){
    my $argref = @_ ? shift() : \$_;
    while( $isEOF || ! defined( $$argref = <ARG> ) ){
        close( ARG );
        return 0 unless @ARGV;
        my $file = shift( @ARGV );
        open( ARG, "<$file" )
        || die( "$0: can't open $file for reading ($!)\n" );
        $isEOF = 0;
    }
    1;
}

# eofARGV: end-of-file test
sub eofARGV(){
    return @ARGV == 0 && ( $isEOF = eof( ARG ) );
}

# makeHandle: Generates another file handle for some file (given by its path)
#             to be written due to a w command or an s command's w flag.
sub makeHandle($){
    my( $path ) = @_;
    my $handle;
    if( ! exists( $wFiles{$path} ) || $wFiles{$path} eq '' ){
        $handle = $wFiles{$path} = gensym();
        if( $doOpenWrite ){
            if( ! open( $handle, ">$path" ) ){
                die( "$0: can't open $path for writing: ($!)\n" );
            }
        }
    } else {
        $handle = $wFiles{$path};
    }
    return $handle;
}

# printQ: Print queued output which is either a string or a reference
#         to a pathname.
sub printQ(){
    for my $q ( @Q ){
        if( ref( $q ) ){
            # flush open w files so that reading this file gets it all
            if( exists( $wFiles{$$q} ) && $wFiles{$$q} ne '' ){
                open( $wFiles{$$q}, ">>$$q" );
            }
            # copy file to stdout: slow, but safe
            if( open( RF, "<$$q" ) ){
                while( defined( my $line = <RF> ) ){
                    print $line;
                }
                close( RF );
            }
        } else {
            print $q;
        }
    }
    undef( @Q );
}

Not exactly worth doing on small expressions.

You can use

perl -pe 's/search/replace/g'

in place of

sed 's/search/replace/'

.. However ..

Those are meant for the command line or shell scripts. Since you're already in a Perl script, the correct answer was given by Paul Tomblin above.
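As a quick sanity check that the two commands are equivalent (the sample input is invented):

```shell
# Both rewrite "search" to "replace"; with one occurrence per line,
# sed without /g and perl with /g produce identical output.
printf 'search me\nno search here\n' | sed 's/search/replace/'
printf 'search me\nno search here\n' | perl -pe 's/search/replace/g'
```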

Have fun,

Edited: OK, I fixed it now.

use File::Grep qw/fmap/;

my @lineNumbers = fmap { /$pattern/ ? $_[1] : () } $fileToProcess;

Here's how you can use Perl as a replacement for Sed:

Instead of:

sed "s/xxx/yyy/g" files_to_process

use:

perl -i.bak -pe "s/xxx/yyy/g" files_to_process

This will modify the files in-place and make a backup (.bak) of each modified file.

It is easier to use Perl than to use grep and sed; see another answer.

Your code failed because Perl messed with the backslashes in your sed code. To prevent this, write your sed code in 'a single-quoted Perl string', then use \Q$sedCode\E to interpolate the code into the shell command. (About \Q...\E, see perldoc -f quotemeta. Its usual purpose is to escape characters for regular expressions, but it also works for shell commands.)

my $fileToProcess = "example.txt";
my $sedCode = 's/^\([0-9]*\)[:].*/\1/p';
chomp(my @linenumbers =
      `grep -n "textToFind" \Q$fileToProcess\E | sed -n \Q$sedCode\E`);
printf "%s\n", join(', ', @linenumbers);

Given example.txt with

this has textToFind
this doesn't
textToFind again

the output is 1, 3.
