Optimizing a large data set in a PHP page

I'm trying to speed up the time it takes to query 200K rows of data and show them in a web page. It normally takes around 20-30 seconds, and I've currently gotten it down to around 7-10 seconds, but I still need to get it down to around 3-7 seconds at most.

In my query, I'm selecting all the rows from a single table and using around 5 of the columns in the PHP script. The query goes like this:

    SELECT * FROM table WHERE company = ? AND division = ?;

I just add the parameters from my session. Then I pass the query and the parameters to a .inc file, which runs the query. It also runs a separate query for the pagination of the data, then compares the two result sets to check for any irregularities between them, and finally displays the data in a new window.

Are there any known algorithms or techniques I could use to speed up the process? I've already removed the garbage code and sped up some functions, but it's still slow.

P.S.: I also only took over this system around 2 days ago, so I'm still familiarizing myself with its structure.

First, make sure there's an index on that table. If your queries include both company and division in your where clause, a compound index across the fields would work well.
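
For example (a minimal sketch; the table name comes from the query above, and the index name is just illustrative):

    CREATE INDEX idx_company_division ON `table` (company, division);

With that index in place, MySQL can look up the rows matching both predicates directly instead of scanning all 200K rows.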

Second, don't run a second query for pagination! Run the first query with a limit clause, cache what you need to, and then paginate from there.
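
A minimal sketch of what the paginated query could look like (the column names col1 through col5 and the id sort column are assumptions; select only the columns the page actually needs instead of *):

    SELECT col1, col2, col3, col4, col5
      FROM `table`
     WHERE company = ? AND division = ?
     ORDER BY id
     LIMIT 50 OFFSET 0;   -- page N would use OFFSET (N - 1) * 50

You bind the same session parameters as before; the only extra work per page is the LIMIT/OFFSET, and you never pull all 200K rows into PHP at once.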

  1. Use persistent connections to the database to avoid connection overhead.

  2. Check that all tables have PRIMARY KEYs on columns with high cardinality (many distinct values, so few rows share any given key value). For example, a gender column has low cardinality (selectivity), while a unique user id column has high cardinality and is a good candidate to become a primary key.

  3. Columns used to reference other tables (join columns) should usually be indexed, and the corresponding columns should have identical data types so that joins on them are faster. Also check that fields you often need to search on (those that appear frequently in WHERE, ORDER BY or GROUP BY clauses) have indices, but don't add too many: the worst thing you can do is add an index on every column of a table (I haven't seen a table needing more than 5 indices, even one 20-30 columns wide). If you never refer to a column in comparisons, there's no need to index it.

  4. Using simpler permissions when you issue GRANT statements enables MySQL to reduce permission-checking overhead when clients execute statements.

  5. Use less RAM per row by declaring columns only as large as they need to be to hold the values stored in them.

  6. Use the leftmost index prefix: in MySQL you can define an index on several columns, and the leftmost part of that index can be used as a separate index on its own, so you need fewer indices (see the sketch after this list).

  7. When your index would need to consist of many columns, consider creating a hash column that is short, reasonably unique, and indexed (see the sketch after this list). Then your query will look like: SELECT * FROM table WHERE hash_column = MD5(CONCAT(col1, col2)) AND col1='aaa' AND col2='bbb';

  8. Consider running ANALYZE TABLE (or myisamchk --analyze from the command line) on a table after it has been loaded with data, to help MySQL better optimize queries.

  9. Use the CHAR type when possible (instead of VARCHAR, BLOB or TEXT), that is, when the values of a column have a constant length: MD5 hashes (32 characters), ICAO or IATA airport codes (4 and 3 characters), BIC bank codes (8 or 11 characters), etc. Data in CHAR columns can be found faster than in variable-length columns (see the sketch after this list).

  10. Don't split a table just because it has many columns. When accessing a row, the biggest performance hit is the disk seek needed to find the first byte of the row.
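
To illustrate tip 6, a minimal sketch (table and column names are assumptions): a single compound index also serves queries that filter only on its leftmost column(s).

    -- One compound index covers several access patterns
    CREATE INDEX idx_company_division ON `table` (company, division);

    -- Uses the full (company, division) index
    SELECT * FROM `table` WHERE company = ? AND division = ?;

    -- Also uses the index, via its leftmost prefix (company)
    SELECT * FROM `table` WHERE company = ?;

    -- Cannot use this index: division alone is not a leftmost prefix
    SELECT * FROM `table` WHERE division = ?;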
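
For tip 7, one way to maintain such a hash column (a sketch assuming MySQL 5.7+ and the hypothetical columns col1 and col2) is a stored generated column with its own short index:

    ALTER TABLE `table`
      ADD COLUMN hash_column CHAR(32)
        AS (MD5(CONCAT(col1, col2))) STORED,
      ADD INDEX idx_hash_column (hash_column);

The query shown in the tip then hits the short hash index first, and the extra col1/col2 comparisons guard against hash collisions.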
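
And for tip 9, a sketch of fixed-length columns (the table and column names are made up for illustration):

    CREATE TABLE airport_codes_example (
      iata_code CHAR(3)  NOT NULL,  -- IATA code: always 3 characters
      icao_code CHAR(4)  NOT NULL,  -- ICAO code: always 4 characters
      row_hash  CHAR(32) NOT NULL   -- MD5 hash: always 32 characters
    );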

For more, please visit:

http://www.ajaxline.com/32-tips-to-speed-up-your-mysql-queries