I have a framework that processes a set of simple (single-table) update queries, and it consistently fails on the second iteration. Within the framework, I build a prepared UPDATE query from an associative array of key-value (col-data) pairs.
The problem I'm encountering is as follows:
foreach ($data as $row) call update($row); // pseudo code
The framework builds the (prepared) update query for each row and returns a state/status on the operation.
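In rough (hypothetical) PHP terms, the per-row flow looks like this; updateAll() and the $update callable are stand-ins for the framework's actual API, not its real method names:

```php
// Rough sketch of the per-row flow described above (hypothetical names):
// each tuple produces one prepared UPDATE and one status result.
function updateAll(array $tuples, $update)
{
    $status = array();
    foreach ($tuples as $i => $row) {
        $status[$i] = call_user_func($update, $row); // true on success
    }
    return $status;
}
```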
The first query always executes and updates successfully, but the second call to the method consistently returns an error:
Duplicate entry {value} for key {key}
Initially, I had a composite pkey for the table where cols B, C, D were the primary key. I tried removing the composite key for testing, but it generates errors on the auto-inc field which is marked as "unique" (not primary).
Tracing execution in the debugger, I can see that the prepare() succeeds, but the second query fails on the execute(), generating the error above.
Also, during the trace, I verified that the statement:
$dblink->next_result();
before the second statement is built.
The MySQL logs show the prepared statement (the second, failing statement is identical to the first):
UPDATE /* /home/sn-m-beds/classes/MBEDS_mySQL.class.inc:(1982) */ config_cfg SET id_cfg = ?, ent_cfg = ?, hash_cfg = ?, key_cfg = ?, value_cfg = ?, modified_cfg = NOW() WHERE idHash_cfg = ?
and the contents of $result (from $result->execute()) are:
affected_rows = -1
insert_id = 0
num_rows = 0
param_count = 6
field_count = 0
errno = 1062
error = "Duplicate entry '31' for key 'id_cfg'"
error_list = {array} (as above)
sqlstate = "23000"
id = 9
So, obviously, deleting the composite primary key had no effect as far as MySQL is concerned.
I've also tried generating (isolating) a separate db link resource on each iteration, but I get the same result.
Not sure what else I can look at... any suggestions or ideas would be appreciated greatly.
thanks!
--mike
UPDATE: 12/23/14:
I'm hesitant to show all the code for the bind/prepare -- there's a lot of code involved in generating and preparing the query. That, and I know from the application's track record that queries generated by this method have always succeeded -- until now, when I try submitting successive queries. The framework has been in production for over a year.
I don't believe the problem is in the query being executed -- the WHERE-clause discriminant executes against a unique key (idHash_cfg). Before this core member function is called, the data tuple is evaluated: if the idHash key exists, a method is called that update()'s the db tuple, whereas the absence of an idHash value triggers a method that builds an INSERT command. In both cases, the $data[] tuple is passed as one of the arguments to the method.
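The update-vs-insert dispatch described here, sketched with hypothetical names (this is not the framework's actual code):

```php
// Hypothetical sketch of the dispatch logic described above: a tuple
// that carries an idHash value is routed to the UPDATE builder, while
// a tuple without one is routed to the INSERT builder.
function dispatch(array $tuple)
{
    return empty($tuple['idHash_cfg']) ? 'insert' : 'update';
}
```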
Some clarification -- within the framework, each class instantiation has a protected member called $data, which can be an array of tuples. The query builder builds, prepares, and executes the query for each row in the $data associative array.
The first query executes successfully; all subsequent queries are built syntactically correctly (same code) but fail with the error described above.
I'm thinking this is a problem with the mysqli db-link resource; something internal to MySQL -- a reference or resource or whatever you want to call it -- is preventing the subsequent queries from executing successfully.
Here's the table create SQL:
*************************** 1. row ***************************
Table: config_cfg
Create Table: CREATE TABLE `config_cfg` (
`id_cfg` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'pkey',
`ent_cfg` char(3) NOT NULL COMMENT 'TLA for entity type',
`hash_cfg` char(32) NOT NULL COMMENT 'hash key for the entity',
`key_cfg` varchar(32) NOT NULL COMMENT 'configuration key value name',
`value_cfg` varchar(32) NOT NULL COMMENT 'value assigned to the key',
`created_cfg` datetime DEFAULT NULL,
`createdBy_cfg` char(32) DEFAULT NULL,
`accessed_cfg` datetime DEFAULT NULL,
`modified_cfg` datetime DEFAULT NULL,
`status_cfg` varchar(50) DEFAULT 'ACTIVE',
`sessionIP_cfg` char(15) DEFAULT NULL,
`idHash_cfg` char(32) DEFAULT NULL,
UNIQUE KEY `id_cfg` (`id_cfg`),
UNIQUE KEY `idHash_cfg` (`idHash_cfg`),
KEY `status_cfg` (`status_cfg`),
KEY `ent_cfg` (`ent_cfg`),
KEY `hash_cfg` (`hash_cfg`),
KEY `key_cfg` (`key_cfg`),
CONSTRAINT `config_cfg_ibfk_1` FOREIGN KEY (`status_cfg`) REFERENCES `status_sts` (`name_sts`)
) ENGINE=InnoDB AUTO_INCREMENT=39 DEFAULT CHARSET=latin1 COMMENT='container for entity configuration settings/values'
1 row in set (0.00 sec)
So, initially, I had a composite index as the primary key: a combination of the ent, hash, and key columns. I removed that index yesterday to eliminate the composite-key complexity and help with diagnosis. As you can see from the dump, I'm just dealing with regular keys at this point.
Some code snippets where the query is being executed:
if ($result = $dbLink->prepare($this->strQuery)) {
    if (call_user_func_array(array($result, 'bind_param'), $bp->refValues($bp->get()))) {
        if ($result->execute()) {
            // [ snip... ]
        } else { // where the error reports are coming from
            snsError::set(ERROR_FATAL, 'query execution failure');
            snsError::set(ERROR_FATAL, $this->strQuery);
            snsError::set(ERROR_FATAL, $result->error);
            $success = false;
            $this->state = DATA_STATUS_MYSQL_ERROR;
        }
    }
}
id_cfg is NOT NULL AUTO_INCREMENT. You shouldn't be trying to set it at all in your UPDATE statement. Leave it out and let MySQL fill it in for you! Remove it from both the statement and the array of values you bind to the placeholders.
UPDATE /* /home/sn-m-beds/classes/MBEDS_mySQL.class.inc:(1982) */ config_cfg
SET id_cfg = ?, ent_cfg = ?, hash_cfg = ?, key_cfg = ?, value_cfg = ?, modified_cfg = NOW()
WHERE idHash_cfg = ?
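With the id_cfg assignment dropped as suggested, the generated statement would look roughly like this (the bind list then shrinks from six placeholders to five):

```sql
UPDATE config_cfg
SET ent_cfg = ?, hash_cfg = ?, key_cfg = ?, value_cfg = ?, modified_cfg = NOW()
WHERE idHash_cfg = ?
```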
tl;dr: the problem was in the code...
The indexing issues were irrelevant... composite keys, id, etc.
Instead of parsing the input tuple, I was fetching pkey info from the class member, which defaulted (in the code) to the 0th tuple if no index was passed... which it wasn't, b/c the impacted code was first-gen and never factored for n-tuples in the data member.
Once I refactored the core code to handle n-tuples, all worked swimmingly.
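For anyone hitting something similar, here's a minimal reconstruction of the bug (hypothetical names, not the framework's actual code):

```php
// Hypothetical reconstruction of the bug: pkey info was read from the
// class's $data member, and the index defaulted to 0 when none was
// passed -- which it never was -- so every row after the first bound
// the first tuple's pkey.
class QueryBuilder
{
    protected $data;  // array of tuples (associative arrays)

    public function __construct(array $tuples)
    {
        $this->data = $tuples;
    }

    // Buggy first-gen accessor: defaults to tuple 0 when $i is omitted.
    public function pkeyBuggy($i = null)
    {
        return $this->data[is_null($i) ? 0 : $i]['id_cfg'];
    }

    // Refactored accessor: the caller must name the current row.
    public function pkeyFixed($i)
    {
        return $this->data[$i]['id_cfg'];
    }
}
```

With two tuples whose pkeys are 31 and 32, pkeyBuggy() returns 31 for both rows -- which is exactly how the second UPDATE ends up trying to SET id_cfg = 31 and tripping the duplicate-key error.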
thank you for all the comments and suggestions - that got me started looking in the right places.