Monday, August 19, 2013

Email Relay Testing With Perl

Here's a Perl module that can be used to test an email relay:

Net::SMTP

This module opens an SMTP connection to a mail server. Once that connection has been established, it is possible to test whether the mail server is able and willing to relay email.

I'm going to guess that there are many mail servers that are only willing to relay email that comes from localhost. Therefore, the Net::SMTP module would need to reside on the same physical box as the mail server itself. This is pure guesswork on my part.

If your Perl script and the mail server are both on the same physical box, then this line:

$server = 'localhost';

or this line:

$server = '127.0.0.1';

should work.

If you are not relaying email from the same host as the mail server itself, I'm going to guess that you will need a username and password to access the mail server. That may be difficult or impossible to obtain if you are working with a shared web hosting company.

The good news is that you may not need a username and password for your mail server if Perl is also installed on the same server. Specifically, if your Perl script and your mail server run on the same host, it may be possible to talk to the mail server by running a cgi-bin Perl script from a web browser.
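As a sketch of what such a relay test might look like with Net::SMTP (the server name and both email addresses below are hypothetical placeholders, not anything from my hosting company):

```perl
#!/usr/bin/perl
# Hedged sketch of a relay test using Net::SMTP.
# The host name and both addresses are hypothetical placeholders.
use strict;
use warnings;
use Net::SMTP;

my $server = 'localhost';

my $smtp = Net::SMTP->new($server, Timeout => 30)
    or die "Could not connect to mail server at $server\n";

$smtp->mail('sender@example.com');    # envelope sender (placeholder)

# If the server refuses the recipient, it is not willing to relay
# to that address; to() returns false and message() holds the
# server's reply, which is where a 5xx refusal would show up.
if ($smtp->to('recipient@example.com')) {
    print "Server accepted the recipient for relay.\n";
} else {
    print "Relay refused: ", $smtp->message();
}

$smtp->quit;
```

Run from the command line on the mail server's own box, this should tell you right away whether the server will accept a given recipient.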

I find this web page extremely helpful for testing a mail relay via a web browser:

Mail relay testing with a perl script

By following the suggestions in the above web page, I was able to test the mail server at the hosting company via my browser. I used the script suggested in the above link to convince the hosting company they need to authorize the email address that I wanted to relay email to.

All this to get a simple FormMail.pl script working. I was testing the mail server to make sure it would relay a form filled in from a web page. It would not.

The above web page gave me a script, and a 500-level SMTP error message, to prove that the web hosting company was not relaying emails to addresses that were not on its approved list. The email address I needed to relay email to was not on that list.

It took a couple of weeks, but the hosting company has come through and approved two email addresses for relay: one email address is for testing purposes, and the other is the client's email address that will ultimately receive the form.

I wrote about this experience here:

IIS and FormMail.pl and Perl Taint Mode and Email Relay

Ed Abbott

Friday, February 5, 2010

Happy Perl Utilities

This blog is about Perl utilities I've been happy to discover.

Here's one such utility:

Extract links from a file [perl]

Just in case the above link ever disappears, I'll describe it:

It is a Perl utility that extracts all the links out of an HTML file. Of course, it is a command-line utility.

Here's the usage line:

Usage: ./extractlink.pl filename.html

This utility relies on the following Perl module from CPAN:

HTML::SimpleLinkExtor - Extract links from HTML


I learned about the above module, and the script that works with it, by finding out about this module first:

HTML::LinkExtor - Extract links from an HTML document


It looks like LinkExtor is the original module, and SimpleLinkExtor is the one I ended up using instead. In other words, you do not need LinkExtor to run the above Perl script.
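In case the linked script ever disappears too, here is a minimal sketch of how HTML::SimpleLinkExtor gets used to do the same job (this assumes the module is already installed, and the filename is whatever you pass on the command line):

```perl
#!/usr/bin/perl
# Minimal sketch of using HTML::SimpleLinkExtor to pull every
# link out of an HTML file named on the command line.
# Assumes the module has already been installed from CPAN.
use strict;
use warnings;
use HTML::SimpleLinkExtor;

die "Usage: $0 filename.html\n" unless @ARGV == 1;

my $extor = HTML::SimpleLinkExtor->new();
$extor->parse_file($ARGV[0]);

# links() returns every link the parser found: href, src, etc.
print "$_\n" for $extor->links;
```

That is essentially the whole program; the module does all the parsing work.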

Here are the steps I took to get this to work. Note that I'm a Debian Linux user:

  1. I already have Perl installed, so there was no need for me to install Perl.
  2. I copied and pasted the above script into a file called extract_links.
  3. I downloaded the SimpleLinkExtor module by clicking on the download link and placed it in an empty directory called download.
  4. I did a cd to download:

    cd download

  5. I typed the following command to extract it:

    tar xzf HTML-SimpleLinkExtor-1.23.tar.gz

  6. Of course, the following directory was created:

    HTML-SimpleLinkExtor-1.23

  7. I did a cd to that directory:

    cd HTML-SimpleLinkExtor-1.23

  8. I read the README file.
  9. I ran the following command as a regular user:

    perl Makefile.PL

  10. Next, I logged in as root in order to do the next few commands on a system-wide basis.
  11. I ran the following three commands as root:

    make
    make test
    make install

  12. I logged out as root so as to go back to being a normal user.
  13. Next, I gave my script execute permission:

    $ chmod u+x extract_links

  14. Since I was working with a website, I gathered all the HTML files into one file to make the HTML easy to work with:

    $ cat *html *htm >temp

  15. Next, I ran the utility against the temp file:

    $ ./extract_links temp >temp2

  16. Since I was looking for unique URLs in the file, I put like URLs together by sorting them with the Unix sort utility:

    $ sort temp2 >temp3

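For what it's worth, the download-extract-build dance in steps 3 through 11 can also be automated with the cpan client that ships with Perl; a sketch, assuming you have network access and run the install as root:

```shell
# One-step alternative to the manual download/tar/make steps:
# the cpan client downloads, extracts, builds, tests, and
# installs the module for you.
cpan HTML::SimpleLinkExtor
```

Doing it by hand as above works fine too; the cpan client just saves a few steps.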
This turned out to be a very, very happy Perl utility for me.

I found out that the website had approximately 1750 links. That was far too many for me to deal with by hand, and it was exactly what I needed to find out.
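As an aside, sort can deduplicate in the same pass with its -u flag, which collapses my sort step and a separate uniq into one command. A small sketch, with sample data standing in for my temp2 file:

```shell
# Sample data standing in for the temp2 file of extracted links.
printf 'http://a.example/\nhttp://b.example/\nhttp://a.example/\n' > temp2

# Sort and deduplicate in one step, then count the unique links.
sort -u temp2 > temp3
wc -l < temp3    # 2 unique links in this sample
```

On the real site, this is how you'd get from 1750 raw links down to the list of distinct URLs.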

I like SimpleLinkExtor and the above script that relies on it.

I'm sure I'll find a use for it again in the future.

Ed Abbott