>
> I can always catch and handle the exception on every save(), but since I
> know there's going to be duplicate data on subsequent fetches, that's
> not an exception and shouldn't be treated as such.
>
I would catch the exception generated by a duplicate key violation in the
database.
I see your point about not using exceptions for frequent scenarios, but
that's a guideline, not an unbreakable law. In this case, the exception is
the best way of getting information from the RDBMS that a duplicate exists.
Using the method of searching for a matching row and inserting only if none is
found generally exposes you to a race condition. Perhaps not in your case,
since you can probably guarantee that only one client is inserting scraped
page content. But more generally, this technique carries the risk that some
other concurrent client inserts the duplicate row in the moment between your
search and your insert. So you should write robust code that handles the
exception anyway.
By doing the insert and catching the exception (even if it's likely to be the
more common result), you don't have to do the search for a matching row at
all. It's done for you implicitly by the unique key check the RDBMS does
before inserting the row.
Regards,
Bill Karwin