I need to preprocess some statistics data before writing it to the main database. My php-cli script retrieves data every 10 minutes and should store it somewhere. Every hour all the saved data is preprocessed again and then written to the main DB.
I thought SQLite would be a nice solution if I kept it in memory. I don't have very large amounts of data (it easily fits in my RAM).
Actually, I am new to SQLite (previously I worked only with MySQL). I found that I can use :memory: instead of a file name to work in memory only. But the database is destroyed once the client disconnects, and if I try to connect to :memory: again from my script, I get a different (new, empty) database.
Is that right? If it is, how can I work with the same database across different PHP script calls if SQLite is stored in memory?
P.S. Perhaps the solution is to put the SQLite file into some "magic" directory? (I am using Linux.)
First, Linux is pretty smart about using the filesystem cache. Thanks to that, reading data from disk is often surprisingly fast, so you should measure whether the performance gain is worth it.
If you want to go ahead, one method you might consider is using a ramdisk. Linux provides a way to create a filesystem in memory, and you can place your SQLite file there.
# mkdir -p /mnt/ram
# mount -t tmpfs -o size=20m tmpfs /mnt/ram
# touch /mnt/ram/mydb.sql
# chown apache:apache /mnt/ram/mydb.sql
... or something similar.
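Once the mount exists, the script just opens the file like any other SQLite database. As a quick sketch of the idea (Python's stdlib sqlite3 here for brevity; the PHP PDO call is analogous), note that most Linux distributions already mount a tmpfs at /dev/shm, so you may not even need a custom mount point. The paths and the fallback to a temp directory are assumptions for this sketch:

```python
import os
import sqlite3
import tempfile

# /dev/shm is a tmpfs (RAM-backed) mount on most Linux distributions;
# fall back to an ordinary temp dir elsewhere (assumption for this sketch).
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
db_path = os.path.join(ram_dir, "mydb.sql")

# First "cron run": create the table and store a sample.
conn = sqlite3.connect(db_path)  # a regular file API, but backed by RAM
conn.execute("CREATE TABLE IF NOT EXISTS stats (ts INTEGER, value REAL)")
conn.execute("INSERT INTO stats VALUES (strftime('%s','now'), 42.0)")
conn.commit()
conn.close()

# A second, independent connection (e.g. the next cron run) sees the data,
# because it is a real file -- unlike :memory:.
conn2 = sqlite3.connect(db_path)
count = conn2.execute("SELECT COUNT(*) FROM stats").fetchone()[0]
conn2.close()
```

The key point is that the database path behaves like any ordinary file, so separate script invocations share it, while reads and writes never touch the physical disk.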
An SQLite database created in :memory: is automatically destroyed when you disconnect the database handle, or when the process that was using it exits or is terminated.
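You can verify this in a few lines (Python's stdlib sqlite3 is used here as an illustration; the behavior through PHP's SQLite bindings is the same): every connection to :memory: gets its own private, empty database.

```python
import sqlite3

# First connection: create a table and insert a row.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE t (x)")
a.execute("INSERT INTO t VALUES (1)")

# Second connection to ":memory:" is a brand-new database --
# the table created on connection 'a' is not visible here.
b = sqlite3.connect(":memory:")
tables = b.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
# 'tables' is an empty list.

a.close()
b.close()
```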
If you want multiple PHP invocations to have access to the database, you cannot use :memory:. Since you don't have big amounts of data, you should store your intermediate results in helper tables in some persistent database: MySQL, or SQLite using a real file on disk.
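A minimal sketch of that buffer-then-flush flow with a file-backed SQLite database (Python stdlib here; the file path, table name, and aggregation are made-up assumptions): the 10-minute runs append raw samples into a helper table, and the hourly run aggregates them and clears the buffer.

```python
import os
import sqlite3
import tempfile

# Assumed location of the intermediate buffer database.
db_path = os.path.join(tempfile.gettempdir(), "stats_buffer.sq3")
if os.path.exists(db_path):
    os.remove(db_path)  # start clean for this demonstration

def collect(value):
    """Called every 10 minutes: append one raw sample to the buffer."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS raw (ts INTEGER, value REAL)")
        conn.execute(
            "INSERT INTO raw VALUES (strftime('%s','now'), ?)", (value,)
        )

def flush():
    """Called hourly: aggregate the buffered samples and clear the table."""
    with sqlite3.connect(db_path) as conn:
        avg, n = conn.execute("SELECT AVG(value), COUNT(*) FROM raw").fetchone()
        conn.execute("DELETE FROM raw")
    return avg, n  # in the real script, write these to the main database

# Simulate three 10-minute runs followed by the hourly flush.
for v in (1.0, 2.0, 3.0):
    collect(v)
avg, n = flush()
```

Because each call opens the file afresh, this works across entirely separate script invocations, which is exactly what :memory: cannot do.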