I'm running the code below over a set of 25,000 results. I need to optimize it because I'm hitting the memory limit.
$oldproducts = Oldproduct::model()->findAll(); /*(here i have 25,000 results)*/
foreach ($oldproducts as $oldproduct) {
    $criteria = new CDbCriteria;
    $criteria->compare('`someid`', $oldproduct->someid);
    $finds = Newproduct::model()->findAll($criteria);
    if (empty($finds)) {
        $new = new Newproduct;
        $new->someid = $oldproduct->someid;
        $new->save();
    } else {
        foreach ($finds as $find) {
            if ($find->price != $oldproduct->price) {
                $find->attributes = array('price' => $oldproduct->price);
                $find->save();
            }
        }
    }
}
The code compares rows of two tables by someid. If it finds a match it updates the price column; if not, it creates a new record.
Use CDataProviderIterator, which:

... allows iteration over large data sets without holding the entire set in memory.

You first have to pass a CDataProvider instance to it:
$dataProvider = new CActiveDataProvider("Oldproduct");
$iterator = new CDataProviderIterator($dataProvider);
foreach ($iterator as $item) {
    // do stuff
}
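Applied to the code in your question, a minimal sketch might look like this (it reuses the Oldproduct/Newproduct models and the compare/update logic from above; the iterator fetches rows page by page instead of loading all 25,000 at once):

$dataProvider = new CActiveDataProvider('Oldproduct');
$iterator = new CDataProviderIterator($dataProvider);

foreach ($iterator as $oldproduct) {
    // Look up the matching Newproduct rows for this someid
    $criteria = new CDbCriteria;
    $criteria->compare('`someid`', $oldproduct->someid);
    $finds = Newproduct::model()->findAll($criteria);

    if (empty($finds)) {
        // No match: create a new record
        $new = new Newproduct;
        $new->someid = $oldproduct->someid;
        $new->save();
    } else {
        // Match found: sync the price if it changed
        foreach ($finds as $find) {
            if ($find->price != $oldproduct->price) {
                // Direct assignment sidesteps mass-assignment safe-attribute rules
                $find->price = $oldproduct->price;
                $find->save();
            }
        }
    }
}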
You could process the rows in chunks of ~5,000 instead of getting all the rows in one go!
$cnt = 5000;
$offset = 0;
do {
    $criteria = new CDbCriteria;
    $criteria->limit = $cnt;
    $criteria->offset = $offset;
    $oldproducts = Oldproduct::model()->findAll($criteria); // fetch one chunk
    foreach ($oldproducts as $oldproduct) {
        // your code
    }
    $offset += $cnt;
} while (count($oldproducts) == $cnt); // a short (or empty) chunk means we're done
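If memory still creeps up between chunks, you can drop each chunk's records before fetching the next one. A sketch of the end of the loop body, using plain PHP (nothing Yii-specific):

    $offset += $cnt;
    $fetched = count($oldproducts); // remember the chunk size before releasing it
    unset($oldproducts);            // release this chunk's active records
    gc_collect_cycles();            // reclaim any reference cycles they formed
} while ($fetched == $cnt);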