Matching array values against a string in PHP

I am working on a small project and I need some help. I have a CSV file with 150,000 rows (each row has 10 columns of data). I am using fgetcsv to read the file, and during the loop I want to match one of the columns (call it stringx) of each row against an array of 10,000 words. If any of the 10,000 words exist in stringx, they are removed using preg_replace.

All of this works, but the problem is that it's too slow.

I have tried 2 methods to match the array:

1. Convert stringx to an array using explode(" ", $stringx), then use array_diff($array_stringx, $array_10000).
2. Use foreach on $array_10000 and preg_replace on $stringx.

Method 1 takes about 60 seconds to go through 200 rows of data, while method 2 can loop through 500 rows in 60 seconds.
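For reference, method 1 looks roughly like this (with a tiny stand-in word list and column value in place of the real 10,000-word array and CSV data):

```php
<?php
// Stand-ins: $array_10000 would really hold the 10,000-word list,
// and $stringx one column of a CSV row.
$array_10000 = array("bad", "ugly");
$stringx = "the bad cat is ugly today";

// Method 1: explode the column into words and diff away the listed words.
$array_stringx = explode(" ", $stringx);
$kept = array_diff($array_stringx, $array_10000);
$result = implode(" ", $kept);
echo $result; // "the cat is today"
```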

Is there a better way to do this?

Once again, I am looking for an efficient way to (basically) array_diff an array of 10,000 words against 150,000 strings one at a time.

Help is much appreciated.

The following is just an alternative. It may or may not fulfil your requirements.

It performs about 84 ops/second with a 10,000-word dictionary and a 15 kB string on my laptop.

The downside is that it does not remove the spaces around the removed words.

$wordlist is just rows with one word each; it could come from a file.

$dict = array_flip(preg_split('/\n/', $wordlist));
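With a tiny stand-in wordlist, the flip turns the words into array keys, which gives O(1) lookups via array_key_exists()/isset():

```php
<?php
// Stand-in for the contents of the 10,000-word file (one word per row).
$wordlist = "bad\nugly";
$dict = array_flip(preg_split('/\n/', $wordlist));
// $dict is now array("bad" => 0, "ugly" => 1)
var_dump(array_key_exists("bad", $dict)); // bool(true)
```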

function filter($str, $dict) {
  // Split the string into words and dedupe so each word is checked once.
  $words = preg_split('/\s/', $str);
  $words = array_unique($words);

  $removeWords = array(); // avoids an undefined-variable notice when nothing matches
  foreach ($words as $word) {
    // O(1) hash lookup against the flipped word list.
    if (array_key_exists($word, $dict)) {
      $removeWords[] = '/\b' . preg_quote($word, '/') . '\b/';
    }
  }
  return preg_replace($removeWords, '', $str);
}

Another example that performs a bit faster (107 ops/s with a 15 kB string and a 10,000-word dictionary):

function filter2($str, $dict) {
  // Split on word boundaries so the separators survive as their own tokens.
  $words = preg_split('/\b/', $str);
  foreach ($words as $k => $word) {
    // Drop tokens that appear in the word list (O(1) hash lookup).
    if (array_key_exists($word, $dict)) {
      unset($words[$k]);
    }
  }
  // Reassemble the string from the remaining tokens.
  return implode('', $words);
}

Is your 10,000-word array sorted? If not, try sorting it first.

Edit: OK, since it's sorted, I'm guessing PHP's array_search doesn't do a binary search, so I'd look for a binary search implementation and use that. If it is indeed just a linear search, you'll get an order-of-magnitude speed increase that way.
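A minimal binary search over a sorted array of strings might look like the sketch below (function name is made up; note that flipping the list into array keys and using isset(), as the dictionary answers above do, is usually faster still):

```php
<?php
// Binary search over a sorted array of strings; returns true if $needle is present.
function word_binary_search($sorted, $needle) {
    $lo = 0;
    $hi = count($sorted) - 1;
    while ($lo <= $hi) {
        $mid = ($lo + $hi) >> 1;          // midpoint of the current range
        $cmp = strcmp($needle, $sorted[$mid]);
        if ($cmp === 0) {
            return true;                   // exact match
        } elseif ($cmp < 0) {
            $hi = $mid - 1;                // needle sorts before the midpoint
        } else {
            $lo = $mid + 1;                // needle sorts after the midpoint
        }
    }
    return false;
}

var_dump(word_binary_search(array("apple", "banana", "cherry"), "banana")); // bool(true)
```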

PHP isn't the language for speed, but I guess you know that. I have to do something similar in a project I'm writing: I write a file with PHP, then use a standalone Matlab program to read that file, process it, and output another one.

You could do the same and write a small program in C that does what array_diff() does. I think there would be a huge difference, although I haven't done any testing.

How about not exploding stringx, and doing a stripos() for each word in $array_10000?

like this:

foreach ($array_10000 as $word)
{
    if (stripos($stringx, $word) !== false)
    {
        // do your stuff
    }
}
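Filling in the "// do your stuff" part, the removal could look like the sketch below (sample data is made up; the preg_quote and \b guards are my additions, since stripos also matches inside longer words):

```php
<?php
// Hypothetical stand-ins for the real word list and CSV column.
$array_10000 = array("bad", "ugly");
$stringx = "The BAD cat stays";

foreach ($array_10000 as $word) {
    // Cheap substring pre-check; only run the regex when the word might be there.
    if (stripos($stringx, $word) !== false) {
        // \b keeps "bad" from eating part of "badge"; /i matches
        // case-insensitively, like stripos does.
        $stringx = preg_replace('/\b' . preg_quote($word, '/') . '\b/i', '', $stringx);
    }
}
echo $stringx; // "The  cat stays"
```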

I haven't tested this, but it just occurred to me:

You could try pre-parsing the file with a regex to obtain the 150,000 words to filter (based on the column separator), and then do the text replacement, picking the best function based on this article I googled.
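One candidate replacement function is strtr(), which takes a map of replacements and scans the subject string once; a sketch with stand-in data (note the caveat that strtr matches raw substrings, not whole words, so "bad" would also hit "badge"):

```php
<?php
// Build a word => '' map from a stand-in list; strtr applies the whole map
// in a single pass over the string, trying longer keys first.
$map = array_fill_keys(array("bad", "ugly"), "");
echo strtr("the bad cat is ugly", $map); // "the  cat is "
```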

I hope it helps! Cheers!

You can just do the foreach and also the implode.

$words = array("one", "two", "three");
$number = 0;
foreach ($words as $false_array) {
    $number += 1;
    $array[$number] = $false_array;
    echo "Added " . $false_array . ". ";
}
foreach ($words as $false_array) {
    echo "Array Contains " . $false_array . ". ";
}

If you were to execute this in PHP, you would get:

Added one. Added two. Added three. Array Contains one. Array Contains two. Array Contains three.