Remove duplicate strlen items from an array?

Is this the simplest way to get rid of duplicate strlen items from an array? I do a lot of programming with tasks like this, which is why I'm asking whether I'm overcomplicating it or whether this really is the easiest way.

$usedlength = array();
$no_duplicate_filesizes_in_here = array();
foreach ($files as $file) {
    foreach ($usedlength as $length) {
        if (strlen($file) == $length) continue 2; // this length is already used, skip to the next file
    }
    $usedlength[] = strlen($file);
    $no_duplicate_filesizes_in_here[] = $file;
}
$files = $no_duplicate_filesizes_in_here;
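
For example, with a handful of made-up filenames (purely illustrative), it keeps only the first file of each string length:

$files = array('a.txt', 'b.log', 'photo.png', 'x.c'); // lengths 5, 5, 9, 3
// ... run the snippet above ...
print_r($files);
// Array ( [0] => a.txt [1] => photo.png [2] => x.c ) -- 'b.log' dropped, length 5 already used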

There's nothing hugely wrong with looping manually, though your example could be reduced to:

$files = array_intersect_key($files, array_unique(array_map('strlen', $files)));

PHP has a plethora of useful array functions available.
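
To see how the pieces fit together, here's the one-liner broken into steps with a small illustrative input (the filenames are just placeholders):

$files = array('a.txt', 'b.log', 'photo.png', 'x.c');

$lengths = array_map('strlen', $files);           // array(5, 5, 9, 3), original keys preserved
$uniques = array_unique($lengths);                // array(0 => 5, 2 => 9, 3 => 3), first key per length kept
$files   = array_intersect_key($files, $uniques); // array(0 => 'a.txt', 2 => 'photo.png', 3 => 'x.c')

Note the original keys are preserved; wrap the result in array_values() if you need it reindexed.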

You can try this:

$no_duplicate_filesizes_in_here = array();
for ($i=count($files)-1;$i>=0;$i--){
  $no_duplicate_filesizes_in_here[strlen($files[$i])] = $files[$i];
}
$files = array_values($no_duplicate_filesizes_in_here);
// if you don't care about the keys, don't bother with array_values()
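
A quick illustration with placeholder filenames: because the loop walks backwards and keys by length, the earliest file with each length wins, but the result comes out in reverse order:

$files = array('a.txt', 'b.log', 'photo.png', 'x.c'); // lengths 5, 5, 9, 3
// ... run the snippet above ...
print_r($files); // Array ( [0] => x.c [1] => photo.png [2] => a.txt )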

If you're using PHP 5.3 or above, array_filter with an anonymous function provides a nice syntax for doing this:

$lengths = array(); // lengths we've already accepted
$nodupes = array_filter($files, function($file) use (&$lengths) {
    if (in_array(strlen($file), $lengths)) {
        return false;
    }

    $lengths[] = strlen($file);
    return true;
});
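
For example (placeholder filenames again), array_filter preserves the original keys of the entries it keeps:

$files = array('a.txt', 'b.log', 'photo.png', 'x.c'); // lengths 5, 5, 9, 3
// ... run the snippet above ...
print_r($nodupes); // Array ( [0] => a.txt [2] => photo.png [3] => x.c )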

Not as short as some of the other answers, but another approach would be to use a key-based lookup:

$used = array();
$no_dupes = array();
foreach ($files as $file) {
  if ( !array_key_exists(($length = strlen($file)), $used) ) {
    $used[$length] = true;
    $no_dupes[] = $file;
  }
}

This has the added bonus of not wasting time storing duplicates only to overwrite them later. However, whether this loop is faster than some of PHP's built-in array functions depends on a number of factors (the number of duplicates, the length of the files array, and so on) and would need to be tested. The above is what I'd assume to be quicker in most cases, but I'm not a processor ;)

The above also means the first file found with a given length is the one that is kept, rather than the last one found, as in some of the other approaches.
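
If you do want to measure it, a rough sketch along these lines would do; the test data and the competing one-liner are just examples, and real timings will vary with your input:

// build some test data: 10000 strings of random length, so plenty of duplicate lengths
$files = array();
for ($i = 0; $i < 10000; $i++) {
    $files[] = str_repeat('x', rand(1, 50));
}

$start = microtime(true);
$used = array();
$no_dupes = array();
foreach ($files as $file) {
  if ( !array_key_exists(($length = strlen($file)), $used) ) {
    $used[$length] = true;
    $no_dupes[] = $file;
  }
}
printf("key-based lookup:         %.5fs\n", microtime(true) - $start);

$start = microtime(true);
$result = array_intersect_key($files, array_unique(array_map('strlen', $files)));
printf("built-in array functions: %.5fs\n", microtime(true) - $start);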