Hash & CSV
Hello list,



I have a CSV file whose first column is unique; I am taking it as the hash
key and the rest of the line as the hash value. I am opening the file and
adding each line to the hash as it is read. I have two questions.



1) How can I measure the performance and the time taken by a Perl script?

2) Is there an optimal method for reading a CSV file and putting it into a
hash?



Note: This is only part of what the script does, and I am supposed to do this
without using any default modules.



Thanks

Manoj


  • Prabu Ayyappan at Mar 14, 2008 at 5:26 am
    ----- Original Message ----
    From: Manoj <manojkumarg@dataone.in>
    To: beginners@perl.org
    Sent: Friday, March 14, 2008 12:09:42 AM
    Subject: Hash & CSV

    Hello list,



    I have a CSV file whose first column is unique; I am taking it as the hash
    key and the rest of the line as the hash value. I am opening the file and
    adding each line to the hash as it is read. I have two questions.



    1) How can I measure the performance and the time taken by a Perl script?

    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?



    Note: This is only part of what the script does, and I am supposed to do this
    without using any default modules.
    Thanks
    Manoj
    ------------------------------------------------------------------------
    Hi Manoj


    1) How can I measure the performance and the time taken by a Perl script?

    For benchmarking Perl scripts you can use the Benchmark module.

    http://search.cpan.org/~rgarcia/perl-5.10.0/lib/Benchmark.pm
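
    For example, a rough sketch of how it could be used (the file name and the
    code in the middle are only placeholders, not the original script):

    use Benchmark;

    my $t0 = Benchmark->new;
    # ... the code being measured, e.g. reading sample.csv into the hash ...
    my $t1 = Benchmark->new;

    # timediff() and timestr() are exported by Benchmark by default
    print 'Elapsed: ', timestr( timediff( $t1, $t0 ) ), "\n";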

    2) Is there an optimal method for reading a CSV file and putting it into a hash?

    There may be better approaches, but this reads the CSV into a hash:

    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    print Dumper \%hsh;

    Hope this helps,

    Thanks,
    Prabu







  • John W. Krahn at Mar 14, 2008 at 6:31 am

    Prabu Ayyappan wrote:

    From: Manoj <manojkumarg@dataone.in>
    I have a CSV file whose first column is unique; I am taking it as the hash
    key and the rest of the line as the hash value. I am opening the file and
    adding each line to the hash as it is read. I have two questions.

    1) How can I measure the performance and the time taken by a Perl script?
    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?

    Note: This is only part of what the script does, and I am supposed to do
    this without using any default modules.
    1) How can I measure the performance and the time taken by a Perl script?

    For benchmarking Perl scripts you can use the Benchmark module.

    http://search.cpan.org/~rgarcia/perl-5.10.0/lib/Benchmark.pm
    Benchmark.pm *should* have been installed as a standard module when Perl
    was installed. Check the "Standard Modules" section of perlmodlib.pod:

    perldoc perlmodlib
    perldoc Benchmark
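
    If modules really cannot be used, as the original note says, the built-in
    time() and times() functions can give a rough measurement as well; a
    minimal sketch:

    my $wall_start = time;                     # wall-clock seconds
    my ( $user_start, $sys_start ) = times;    # CPU seconds used so far

    # ... the code being measured ...

    my ( $user_end, $sys_end ) = times;
    printf "wall: %d s, user CPU: %.2f s, system CPU: %.2f s\n",
        time - $wall_start,
        $user_end - $user_start,
        $sys_end - $sys_start;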

    2) Is there an optimal method for reading a CSV file and putting it into a hash?

    There may be better approaches, but this reads the CSV into a hash:

    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    That is a *very* inefficient way to populate a hash as you are copying
    the entire hash for every record in the file. Better to add the keys
    and values individually:

    my %hsh;
    while ( <INFILE> ) {
        chomp;
        my ( $key, $value ) = split /,/;
    }

    print Dumper \%hsh;

    John
    --
    Perl isn't a toolbox, but a small machine shop where you
    can special-order certain sorts of tools at low cost and
    in short order. -- Larry Wall
  • John W. Krahn at Mar 14, 2008 at 6:44 am

    John W. Krahn wrote:
    Prabu Ayyappan wrote:
    From: Manoj <manojkumarg@dataone.in>
    I have a CSV file whose first column is unique; I am taking it as the hash
    key and the rest of the line as the hash value. I am opening the file and
    adding each line to the hash as it is read. I have two questions.
    1) How can I measure the performance and the time taken by a Perl script?
    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?

    Note: This is only part of what the script does, and I am supposed to do
    this without using any default modules.
    1) How can I measure the performance and the time taken by a Perl script?

    For benchmarking Perl scripts you can use the Benchmark module.

    http://search.cpan.org/~rgarcia/perl-5.10.0/lib/Benchmark.pm
    Benchmark.pm *should* have been installed as a standard module when Perl
    was installed. Check the "Standard Modules" section of perlmodlib.pod:

    perldoc perlmodlib
    perldoc Benchmark

    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?

    There may be better approaches, but this reads the CSV into a hash:

    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    That is a *very* inefficient way to populate a hash as you are copying
    the entire hash for every record in the file. Better to add the keys
    and values individually:

    my %hsh;
    while ( <INFILE> ) {
        chomp;
        my ( $key, $value ) = split /,/;
        # Oops :-)
        $hsh{ $key } = $value;
    }

    print Dumper \%hsh;
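
    Since the original question wants the whole rest of the line as the value,
    a variation that splits only on the first comma could look like this (just
    a sketch, assuming plain comma-separated data with no quoting or embedded
    commas):

    my %hsh;
    while ( my $line = <INFILE> ) {
        chomp $line;
        # limit the split to 2 fields: the first column is the key,
        # everything after the first comma is the value
        my ( $key, $rest ) = split /,/, $line, 2;
        $hsh{ $key } = $rest;
    }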


    John
    --
    Perl isn't a toolbox, but a small machine shop where you
    can special-order certain sorts of tools at low cost and
    in short order. -- Larry Wall
  • Manoj at Mar 14, 2008 at 7:22 pm
    Using Data::Dumper takes more time for my 10,000-line CSV file.
    This solved a few queries, and the Benchmark module was a valuable new
    addition for me. Thanks





    -----Original Message-----
    From: John W. Krahn
    Sent: Friday, March 14, 2008 12:01 PM
    To: Perl Beginners
    Subject: Re: Hash & CSV

    Prabu Ayyappan wrote:
    From: Manoj <manojkumarg@dataone.in>
    I have a CSV file whose first column is unique; I am taking it as the hash
    key and the rest of the line as the hash value. I am opening the file and
    adding each line to the hash as it is read. I have two questions.

    1) How can I measure the performance and the time taken by a Perl script?
    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?

    Note: This is only part of what the script does, and I am supposed to do
    this without using any default modules.
    1) How can I measure the performance and the time taken by a Perl script?

    For benchmarking Perl scripts you can use the Benchmark module.

    http://search.cpan.org/~rgarcia/perl-5.10.0/lib/Benchmark.pm
    Benchmark.pm *should* have been installed as a standard module when Perl
    was installed. Check the "Standard Modules" section of perlmodlib.pod:

    perldoc perlmodlib
    perldoc Benchmark

    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?
    There may be better approaches, but this reads the CSV into a hash:

    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    That is a *very* inefficient way to populate a hash as you are copying
    the entire hash for every record in the file. Better to add the keys
    and values individually:

    my %hsh;
    while ( <INFILE> ) {
    chomp;
    my ( $key, $value ) = split /,/;
    }

    print Dumper \%hsh;

    John
    --
    Perl isn't a toolbox, but a small machine shop where you
    can special-order certain sorts of tools at low cost and
    in short order. -- Larry Wall

  • JBallinger at Mar 17, 2008 at 3:49 pm

    On Mar 14, 3:26 pm, manojkum...@dataone.in (Manoj) wrote:
    Using Data::Dumper takes more time for my 10,000-line CSV file.
    This solved a few queries, and the Benchmark module was a valuable new
    addition for me. Thanks
    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?
    There may be better approaches, but this reads the CSV into a hash:
    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    That is a *very* inefficient way to populate a hash as you are copying
    the entire hash for every record in the file.  Better to add the keys
    and values individually:

    my %hsh;
    while ( <INFILE> ) {
    chomp;
    my ( $key, $value ) = split /,/;
    }
    print Dumper \%hsh;
    John
    Suppose my CSV file has 5 columns: f id fa mo ge.
    Will my ($key, $value) = split /,/; still work?
    Is there any other method that is more efficient?

    JB
  • Rob Dixon at Mar 18, 2008 at 3:56 pm

    JBallinger wrote:
    On Mar 14, 3:26 pm, manojkum...@dataone.in (Manoj) wrote:

    Using Data::Dumper takes more time for my 10,000-line CSV file.
    This solved a few queries, and the Benchmark module was a valuable new
    addition for me. Thanks
    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?
    There may be better approaches, but this reads the CSV into a hash:
    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    That is a *very* inefficient way to populate a hash as you are copying
    the entire hash for every record in the file. Better to add the keys
    and values individually:

    my %hsh;
    while ( <INFILE> ) {
    chomp;
    my ( $key, $value ) = split /,/;
    }

    print Dumper \%hsh;
    Suppose my CSV file has 5 columns: f id fa mo ge.
    Will my ($key, $value) = split /,/; still work?
    Is there any other method that is more efficient?
    A hash element can have only one key and one value, but that value can
    be a reference to an anonymous array. Something like this may do what you want:

    my %hash;
    while (<INFILE>) {
        chomp;
        my ($key, @vals) = split /,/;
        $hash{$key} = \@vals;
    }
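
    The stored values are then reached through the array reference, for
    example (the key 'x42' and the field positions are made up here purely
    for illustration):

    # the second of the remaining fields ('fa' in the example header)
    # for the row whose first column is 'x42'
    print $hash{'x42'}[1], "\n";

    # or all of the remaining fields for that key, joined back together
    print join( ',', @{ $hash{'x42'} } ), "\n";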

    HTH,

    Rob
  • Prabu Ayyappan at Mar 18, 2008 at 11:03 am
    ----- Original Message ----
    From: JBallinger <ballingerjohns@gmail.com>
    To: beginners@perl.org
    Sent: Monday, March 17, 2008 9:18:12 PM
    Subject: Re: Hash & CSV
    On Mar 14, 3:26 pm, manojkum...@dataone.in (Manoj) wrote:
    Using Data::Dumper takes more time for my 10,000-line CSV file.
    This solved a few queries, and the Benchmark module was a valuable new
    addition for me. Thanks
    2) Is there an optimal method for reading a CSV file and putting it into
    a hash?
    There may be better approaches, but this reads the CSV into a hash:
    use Data::Dumper;
    open(INFILE, "<", "sample.csv") or die $!;
    my %hsh;
    %hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
    That is a *very* inefficient way to populate a hash as you are copying
    the entire hash for every record in the file. Better to add the keys
    and values individually:

    my %hsh;
    while ( <INFILE> ) {
    chomp;
    my ( $key, $value ) = split /,/;
    }
    print Dumper \%hsh;
    John
    Suppose my CSV file has 5 columns: f id fa mo ge.
    Will my ($key, $value) = split /,/; still work?
    Is there any other method that is more efficient?

    JB


    Hi,

    There is a module, AnyData, which can be used for different file formats,
    including CSV. Here is a sample piece of code:

    use AnyData;
    use Data::Dumper;

    my $format    = 'CSV';
    my $data      = 'sample.csv';
    my $open_mode = 'r';    # open the file in read-only mode

    my $table = adTie( $format, $data, $open_mode );
    print Dumper $table;

    Hope this helps.

    Best Regards,
    Prabu



Discussion Overview
group: beginners
category: perl
posted: Mar 13, '08 at 6:35p
active: Mar 18, '08 at 3:56p
posts: 8
users: 5
website: perl.org
