<div dir="ltr">I misread, disregard the last message.<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 27 May 2014 12:41, Matthew Bell <span dir="ltr"><<a href="mailto:matthewrobertbell@gmail.com" target="_blank">matthewrobertbell@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Also, you should use a pool when using gevent, to limit concurrency.<br></div><div class="gmail_extra"><div>
<div class="h5"><br><br><div class="gmail_quote">On 27 May 2014 12:35, Matthew Bell <span dir="ltr"><<a href="mailto:matthewrobertbell@gmail.com" target="_blank">matthewrobertbell@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>In my experience, this is likely to be a gevent + requests + lxml leak.<br><br></div>Here's the easy way to get around it: remove grequests, setup rq - <a href="http://python-rq.org/" target="_blank">http://python-rq.org/</a> - (very easy). Create a simple function that takes an ID, then does the scraping. do a loop over the <br>
<pre><div><span>ids</span> <span>=</span> <span>xrange</span><span>(</span><span>12210</span><span>,</span> <span>150000</span><span>) and schedule a job for each ID, run as many workers as you wish. It may use a little more memory, but it won't leak, due to rq cleaning up properly (forking for each job)</span></div>
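
A minimal sketch of what I mean (scrape_item, the example.com URL and the Pony save step are placeholders you'd replace with your own code):

# tasks.py
import requests
from lxml import html

def scrape_item(item_id):
    # fetch and parse one page; store the result via Pony here
    page = requests.get('http://example.com/item/%d' % item_id)
    tree = html.fromstring(page.content)
    # ... extract data from tree and save it ...

# enqueue.py
from redis import Redis
from rq import Queue
from tasks import scrape_item

q = Queue(connection=Redis())
for item_id in xrange(12210, 150000):
    q.enqueue(scrape_item, item_id)

Each job runs in a forked child of the worker, so whatever memory requests/lxml leak during one job is released when that child exits.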
</pre><div class="gmail_extra">Pony / mysql will have no problems with you doing it this way. It's sensible to run the rq workers under supervisor, with a config like:<br><br>[program:rq]<br>directory=/app_folder/<br>
command=rqworker<br>process_name=%(process_num)02d<br>numprocs=6<br>autostart=true<br>autorestart=true<br>stopsignal=TERM<br><br></div><div class="gmail_extra">You can easily scale it to multiple machines if you wish, just point the workers to the same redis and database :)<br>
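
For the multi-machine setup, each box just needs to point its workers at the shared Redis. A rough sketch (redis-host is a placeholder for your actual Redis server):

# worker.py - run several of these per machine (e.g. via the supervisor config above)
from redis import Redis
from rq import Queue, Worker, Connection

conn = Redis(host='redis-host', port=6379)  # the same Redis instance on every machine

with Connection(conn):
    Worker([Queue('default')]).work()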
</div><div class="gmail_extra"><br><div class="gmail_quote"><div><div>On 27 May 2014 10:37, Роман Рубан <span dir="ltr"><<a href="mailto:ryr1986@gmail.com" target="_blank">ryr1986@gmail.com</a>></span> wrote:<br>

Hello,

https://gist.github.com/ryr/6a2d8997057a70be7eb3
It's my working site crawler.
It consumes 20 GB+ of memory.

How can I optimize the cache usage?

_______________________________________________
ponyorm-list mailing list
ponyorm-list@ponyorm.org
/ponyorm-list

--
Regards,

Matthew Bell