Simple MySQL query optimization

I've recently been scraping Boss Zhipin (Boss直聘) job data. Instead of using a framework like Scrapy, I built the whole thing from scratch; URL construction and data storage both run on an Alibaba Cloud International VPS (1 GB RAM, and the CPU is equally weak). Once the table passed 400k rows, MySQL queries became the bottleneck.

After some digging, the optimization came down to two fronts: the database configuration and the query statements themselves:

  1. innodb_buffer_pool_size = 256M (this one matters most, but my RAM is tiny; setting it too high can lock up the machine, so there's not much headroom)
  2. max_allowed_packet = 512M (makes no real difference in my case, again because of the small RAM)
  3. Enable the slow query log and optimize the queries it flags: add indexes (see https://www.mysqltutorial.org/mysql-index/mysql-create-index/ ), avoid leading-wildcard `LIKE '%...'` and NULL comparisons, and always filter with a WHERE clause on specific columns instead of `SELECT *`:

slow_query_log = ON
slow_query_log_file = /var/lib/mysql/tmp_slow.log
long_query_time = 10
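The index advice in item 3 can be sketched end to end. The snippet below uses sqlite3 as a lightweight stand-in for MySQL (the `CREATE INDEX` DDL has the same shape in both); the table mirrors the `jobs` table from the slow log further down, and the index name `idx_job_url` is made up for this example:

```python
import sqlite3

# sqlite3 standing in for MySQL; the equivalent MySQL DDL would be:
#   CREATE INDEX idx_job_url ON jobs (job_url);
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE jobs (job_url TEXT, city_code TEXT)")
cur.executemany(
    "INSERT INTO jobs VALUES (?, ?)",
    ((f"https://example.com/job/{i}", "101010100") for i in range(100_000)),
)

target = ("https://example.com/job/99999",)

# Without an index, a point lookup on job_url walks the whole table.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT job_url FROM jobs WHERE job_url = ?", target
).fetchall()

cur.execute("CREATE INDEX idx_job_url ON jobs (job_url)")

# With the index, the same query becomes an index search.
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT job_url FROM jobs WHERE job_url = ?", target
).fetchall()

print(plan_before[-1][-1])  # full-table scan
print(plan_after[-1][-1])   # search via idx_job_url
```

The same change on the real `jobs` table turns the dedup lookup from an O(rows) scan into an index seek, which is exactly where the slow log below points.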

$ sudo mysqldumpslow -t 10 /var/lib/mysql/tmp_slow.log

Reading mysql slow query log from /var/lib/mysql/tmp_slow.log
Count: 3  Time=20.94s (62s)  Lock=0.00s (0s)  Rows=590255.0 (1770765), root[root]@[47.56.250.177]
  SELECT `job_url` FROM jobs

Count: 68  Time=16.41s (1116s)  Lock=0.00s (0s)  Rows=589155.1 (40062550), root[root]@25hosts
  SELECT `job_url`, `city_code` FROM jobs

Count: 11  Time=16.38s (180s)  Lock=0.00s (0s)  Rows=110793.7 (1218731), root[root]@10hosts
  select `main_url` from start_url_cp LIMIT N, N

Died at /bin/mysqldumpslow line 167, <> chunk 82.

Because of how the crawler framework is built, it has to query the already-stored URLs to avoid re-crawling duplicates, so at this data volume a full-table query really does take time, with network latency on top (the crawler hosts all run on Google GCE). The current performance is acceptable:

mysql> select count(job_url) from jobs;
+----------------+
| count(job_url) |
+----------------+
|         592501 |
+----------------+
1 row in set (2.35 sec)
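Given that the whole point of these queries is dedup, one way to cut the repeated `SELECT job_url FROM jobs` round trips (the 68-count entry in the slow log above) is to pull the URL set once per run and test membership in memory. This is only a sketch, not the original crawler's code: the helper names are hypothetical, and sqlite3 again stands in for the MySQL connection:

```python
import sqlite3

def load_seen_urls(cursor):
    """One full SELECT at startup; ~600k short strings fit comfortably in RAM."""
    cursor.execute("SELECT job_url FROM jobs")
    return {row[0] for row in cursor.fetchall()}

def is_new(url, seen):
    """O(1) in-memory membership test instead of a per-URL network query."""
    if url in seen:
        return False
    seen.add(url)  # record it so later checks in the same run also hit
    return True

# Demo with sqlite3 standing in for the real MySQL connection.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE jobs (job_url TEXT)")
cur.executemany("INSERT INTO jobs VALUES (?)",
                [("https://example.com/job/1",), ("https://example.com/job/2",)])

seen = load_seen_urls(cur)
print(is_new("https://example.com/job/1", seen))  # False: already crawled
print(is_new("https://example.com/job/3", seen))  # True: not seen yet
print(is_new("https://example.com/job/3", seen))  # False: deduped within the run
```

The trade-off is staleness across hosts (each GCE worker only sees URLs that existed when it started), which may or may not matter depending on how the work is partitioned.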

This query used to be extremely slow, degrading sharply as the table grew: tens of seconds, sometimes ten-plus minutes. Reference:

https://blog.csdn.net/u011277123/article/details/72627011

To review past queries: cat ~/.mysql_history
