I have a set of files I am trying to import into MySQL.
Each CSV file looks like this:
Header1;Header2;Header3;Header4;Header5
Data1;Data2;Data3;Data4;Data5;
Data1;Data2;Data3;Data4;Data5;
Data1;Data2;Data3;Data4;Data5;
Data1;Data2;Data3;Data4;Data5;
Data may contain spaces, periods, or colons. They absolutely will not contain a semicolon, so that is a valid delimiter. They also will not contain \n or any other newline characters. Some examples:
2010.08.30 18:34:59
0.7508
String of characters with spaces in them
Each file has a unique name to it. The names all conform to the following pattern:
Token1_Token2_Token3.csv
I am interested in combining a lot of these CSV files (on the order of several hundred) into one CSV file. Files can range from 10KB to 400MB. Ultimately, I want to send it over to MySQL. Don't worry about getting rid of the individual header rows; I can do that in MySQL easily.
I would like the final CSV file to look like this:
Header1,Header2,Header3,Header4,Header5,FileName
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
I don't care about any of the other tokens. I can also live with it if the solution just dumps each CSV filename into the Token1 field because, again, I can parse that in MySQL easily.
Please help me! I've spent over 10 hours on what should be a relatively easy problem.
Technologies available:
awk
windows batch
linux bash
powershell
perl
python
php
mysqlimport
This is a server box, so I won't be able to compile anything, but if you give me a Java solution I will definitely try to run it on the box.
Believe it or not, it may be as simple as:
awk 'BEGIN{OFS = FS = ";"} {print $0, FILENAME}' *.csv > newfile.csv
If you want to change the field separator from semicolons to commas:
awk 'BEGIN{OFS = ","; FS = ";"} {$1 = $1; print $0, FILENAME}' *.csv > newfile.csv
To include only the first token:
awk 'BEGIN{OFS = ","; FS = ";"} {$1 = $1; split(FILENAME, a, "_"); print $0, a[1]}' *.csv > newfile.csv
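One wrinkle: your data rows end with a trailing semicolon, which awk treats as an empty sixth field, so the output above would have an empty column before the token. Stripping the trailing semicolon first should take care of it:
sub(/;$/, "") and re-assigning $1 forces awk to rebuild the record with the new separator:
awk 'BEGIN{OFS = ","; FS = ";"} {sub(/;$/, ""); $1 = $1; split(FILENAME, a, "_"); print $0, a[1]}' *.csv > newfile.csv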
You might want to try this quick & dirty Perl hack to convert the data:
#!/usr/bin/perl
use strict;
use warnings;

# Open the input file
my $inputfile = shift or die("Usage: $0 <filename>\n");
open F, '<', $inputfile or die("Could not open input file ($!)\n");

# Split the filename into an array of tokens
my @tokens = split("_", $inputfile);

my $isFirstline = 1;

# Iterate over the file line by line so large files aren't slurped into memory
while (my $line = <F>) {
    my $addition;
    chomp($line); # Remove the newline

    # Append the complete filename to the first (header) line
    if ($isFirstline) {
        $isFirstline = 0;
        $addition = ",$inputfile";
    } else { # Append the first token for the rest of the lines
        $addition = ",$tokens[0]";
    }

    # Split the data into the @elements array
    my @elements = split(";", $line);

    # Join it using commas and append the filename/token plus a newline
    print join(",", @elements) . $addition . "\n";
}
close(F);
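The script prints to standard output, so to run it over several hundred files you could loop from the shell and concatenate everything, e.g. in bash (assuming you saved the script above as convert.pl; the underscore glob also keeps the output file from being picked up by the loop):
for f in *_*_*.csv; do perl convert.pl "$f"; done > combined.csv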
Using Text::CSV:
#!/usr/bin/env perl
use strict;
use warnings;

use File::Find;
use Text::CSV;

my $semi_colon_csv = Text::CSV->new( { 'sep_char' => ';' } );
my $comma_csv = Text::CSV->new( {
    'sep_char' => ',',
    'eol'      => "\n",
} );

open my $fh_output, '>', 'output.csv' or die $!;

sub convert {
    my $file_name = shift;
    open my $fh_input, '<', $file_name or die $!;

    # Header row: append the complete filename
    my $row = $semi_colon_csv->getline($fh_input);
    $comma_csv->print( $fh_output, [ @$row, $file_name ] );

    # Data rows: append the first filename token
    my ($token) = ( $file_name =~ /^([^_]+)/ );
    while ( $row = $semi_colon_csv->getline($fh_input) ) {
        pop @$row unless length $row->[-1]; # drop the empty field left by the trailing semicolon
        $comma_csv->print( $fh_output, [ @$row, $token ] );
    }
}

sub wanted {
    return unless -f;
    convert($_);
}

my $path = 'csv'; # assuming that all your CSVs are in ./csv/
find( \&wanted, $path );
Sample output:
Header1,Header2,Header3,Header4,Header5,Token1_Token2_Token3.csv
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
Data1,Data2,Data3,Data4,Data5,Token1
Perl's DBI module can cope with both CSV files (the DBD::CSV module is required) and MySQL. Just put all your CSV files in the same dir and query them like this:
use DBI;

my $dbh = DBI->connect ("dbi:CSV:", "", "", { f_dir => "$DATABASEDIR", f_ext => ".csv", csv_sep_char => ";",});

my $sth = $dbh->prepare ("SELECT * FROM Token1_Token2_Token3");
$sth->execute;
while (my $hr = $sth->fetchrow_hashref) {
    [...]
}
$sth->finish ();
You can query CSV files (including with JOIN statements!) and insert the data directly into MySQL.
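For the MySQL side, here is a minimal sketch of that round trip, assuming DBD::mysql is installed and a target table already exists; the database name, credentials, and table/column names below are placeholders:
use strict;
use warnings;
use DBI;

# Source: every .csv file in ./csv/ is exposed as a table, split on semicolons
my $csv_dbh = DBI->connect("dbi:CSV:", "", "",
    { f_dir => "csv", f_ext => ".csv", csv_sep_char => ";", RaiseError => 1 });

# Target: placeholder MySQL connection
my $mysql_dbh = DBI->connect("dbi:mysql:database=mydb", "user", "password",
    { RaiseError => 1 });

my $ins = $mysql_dbh->prepare(
    "INSERT INTO combined (Header1, Header2, Header3, Header4, Header5, FileName)
     VALUES (?, ?, ?, ?, ?, ?)"
);

my $sth = $csv_dbh->prepare("SELECT * FROM Token1_Token2_Token3");
$sth->execute;
while (my @row = $sth->fetchrow_array) {
    $ins->execute(@row[0 .. 4], "Token1"); # first filename token as the extra column
}
$sth->finish();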
This is one way to do it in PowerShell:
$res = 'result.csv'
'Header1,Header2,Header3,Header4,Header5,FileName' > $res
foreach ($file in dir *.csv)
{
    if ($file -notmatch '(\w+)_\w+_\w+\.csv') { continue }
    $csv = Import-Csv $file -Delimiter ';'
    $csv | Foreach {"{0},{1},{2},{3},{4},{5}" -f `
        $_.Header1,$_.Header2,$_.Header3,$_.Header4,$_.Header5,$matches[1]} >> $res
}
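Note that Import-Csv names each row's properties after the values in the header row, so Header1 through Header5 above stand in for whatever your real header names are.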
If the files weren't potentially so large, I would suggest going this route:
$csvAll = @()
foreach ($file in dir *.csv)
{
    if ($file -notmatch '(\w+)_\w+_\w+\.csv') { continue }
    $csv = Import-Csv $file -Delimiter ';'
    $csv | Add-Member NoteProperty FileName $matches[1]
    $csvAll += $csv
}
$csvAll | Export-Csv result.csv -NoTypeInformation
However, this holds the complete contents of all CSV files in memory until it is ready to export at the end. Not feasible unless you have 64-bit Windows with lots of memory. :-)