I just switched from a local MySQL server (on an EC2 micro instance) to an RDS instance and I see a dramatic speed degradation.
One SQL request to RDS costs as much as 40 requests to the local DB.
The large number of SQL requests makes the script incredibly slow. Here is a table showing website page generation speed:
Local MySQL server vs RDS

+----------+---------------+------+
| Requests | Local server* | RDS* |
+----------+---------------+------+
|        1 |           220 |  115 |
|        3 |           200 |   60 |
|       10 |           180 |   25 |
|       20 |           150 |   10 |
|       40 |           115 |    5 |
+----------+---------------+------+

* Pages generated per second (1/n sec per page; for example, 200 means 1/200 sec).
So basically, a page with 40 very simple and fast SQL requests, which used to render in 0.008 sec, now renders in 0.2 sec with RDS, which is a significant slowdown (roughly 23x!).
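For what it's worth, these numbers are consistent with per-query network round-trip time dominating the page time, since the queries run sequentially. A rough back-of-the-envelope sketch (the RTT values below are assumed for illustration, not measured):

```python
# Rough model: each sequential query pays one network round trip plus execution time.
def page_time(n_queries, rtt_s, exec_s):
    """Total page generation time for n sequential queries."""
    return n_queries * (rtt_s + exec_s)

# Assumed values: ~0.2 ms RTT to a local server, ~5 ms RTT to RDS,
# ~0.05 ms query execution either way (queries themselves are cheap).
local = page_time(40, rtt_s=0.0002, exec_s=0.00005)  # ~0.01 s
rds = page_time(40, rtt_s=0.005, exec_s=0.00005)     # ~0.2 s

print(f"local: {local:.3f} s, rds: {rds:.3f} s")
```

Under those assumptions the RDS page lands right around the observed 0.2 sec, which suggests the per-query latency, not query execution, is the bottleneck.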
Facts

- Everything is in us-west-1b.
- The query cache is enabled (query_cache_type=1, query_cache_size=16777216, query_cache_limit=1048576), and I can confirm it with SHOW GLOBAL VARIABLES LIKE 'query_cache_type'.
- The queries look like select * from user where id = 100, where id is a unique index, and they run really fast when executed from the MySQL client (0.0005 sec).

I understand there is a network factor now: RDS is a remote server and there is some latency between client and server. But I didn't expect such a dramatic decrease. Is this normal, or is there something I can do about it?