Thursday, November 27, 2008

Re: [fw-db] Parsing through large result sets

What monk.e.boy described in the same thread is actually one of the things
I need to do.
I have a large table with securities and another, even larger, table with
different types of prices related to the securities.

Making a join on those two tables results in rows with 15-20 columns: the
first column is the security id, the second is a date, and the rest of the
columns hold a value for each of the different price types. At the moment
this gives me about 20,000 rows.
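
Roughly, the query looks something like this (the table and column names
here are made up, but the shape is the same -- each price type is pivoted
into its own column):

    $sql = "SELECT s.id AS security_id,
                   p.price_date,
                   MAX(CASE WHEN p.price_type = 'open'  THEN p.value END) AS open_price,
                   MAX(CASE WHEN p.price_type = 'close' THEN p.value END) AS close_price
                   -- ... one CASE per price type, 15-20 columns in total
            FROM securities s
            JOIN prices p ON p.security_id = s.id
            GROUP BY s.id, p.price_date";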

Trying to dump all that data to, e.g., a CSV file triggers the
out-of-memory error.
I would've loved to be able to just do:
file_put_contents($filename, $db->fetchAll($sql));
... but fetchAll() understandably triggers the out-of-memory error.

Hence the while ($row = $db->fetch()) loop I am used to, but sadly that
also triggers an out-of-memory error when I use ZF (bear in mind that I
have memory_limit set to 512MB).
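
I suspect part of the problem is that PDO_MySQL buffers the whole result
set on the client by default, so even a row-by-row fetch() ends up holding
all 20,000 rows in memory at once. A minimal sketch of the direction I am
hoping will work, assuming the Pdo_Mysql adapter (the connection details
are placeholders and this is untested):

    // Disable PDO's client-side result buffering so rows stream from the
    // server instead of being loaded into memory all at once.
    $db = Zend_Db::factory('Pdo_Mysql', array(
        'host'           => 'localhost',
        'username'       => 'user',
        'password'       => 'secret',
        'dbname'         => 'mydb',
        'driver_options' => array(
            PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false,
        ),
    ));

    $fh   = fopen($filename, 'w');
    $stmt = $db->query($sql); // returns a Zend_Db_Statement

    // Write each row to the CSV as soon as it arrives, so memory use
    // stays flat no matter how many rows the join produces.
    while ($row = $stmt->fetch(Zend_Db::FETCH_NUM)) {
        fputcsv($fh, $row);
    }

    fclose($fh);

One caveat I know of: with buffering disabled, MySQL will not accept
another query on the same connection until this statement has been read
to the end or closed.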

I hope you can follow me :)


keith Pope-4 wrote:
>
> Could you describe what you are trying to do, how many rows, etc.? I
> wouldn't mind seeing if there is a workaround.

--
View this message in context: http://www.nabble.com/Parsing-through-large-result-sets-tp20264357p20718171.html
Sent from the Zend DB mailing list archive at Nabble.com.
