What are the theoretical and practical drawbacks of running an entire PHP application inside a database transaction, so that any unexpected error or exception in the script reverts every change made to the database, avoiding a state where an unfinished script has updated some tables but not others? This obviously isn't (and never would be) recommended as good practice, but I would like to understand in detail why.
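For concreteness, here is a minimal sketch of the pattern I have in mind, assuming PDO with exception error mode; `handle_request()` is a hypothetical stand-in for the entire application:

```
<?php
// The pattern in question: the whole request runs inside one transaction,
// so any uncaught exception rolls back every change. handle_request() is a
// hypothetical stand-in for the entire application.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->beginTransaction();
try {
    handle_request($pdo);   // every query the application makes runs in here
    $pdo->commit();         // persist all changes only if nothing failed
} catch (Throwable $e) {
    $pdo->rollBack();       // any failure reverts every change
    throw $e;
}
```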
Database transactions ensure that multiple operations obey the ACID properties:
Atomic - this is the property to which you allude in your question, whereby either every operation in the transaction succeeds or else they all fail;
Consistent - this property ensures that each transaction takes the database from one valid state to another, so integrity constraints always hold at commit;
Isolated - this property ensures that concurrent transactions (i.e. from different connections) occur as though they were executed serially; and
Durable - once committed, changes are permanent.
In order to ensure the third property, Isolation, RDBMSs like MySQL perform locking so that, for example, two transactions cannot write to the same record at the same time (when a record is locked, other transactions must wait for the lock to be released before they can proceed).
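To make this lock waiting concrete, here is a hedged sketch assuming MySQL/InnoDB, PDO, and a hypothetical `accounts` table. Two connections stand in for two concurrent clients, and a short lock-wait timeout makes the blocking observable in a single script:

```
<?php
// Hypothetical accounts table; assumes MySQL/InnoDB. Two PDO connections
// stand in for two concurrent clients.
$opts = [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION];
$a = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', $opts);
$b = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', $opts);

// Shorten B's lock wait so the blocking is observable in one script.
$b->exec("SET innodb_lock_wait_timeout = 2");

$a->beginTransaction();
$a->exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
// Connection A now holds an exclusive lock on the row with id = 1.

try {
    // B must wait for A's lock; after 2 seconds MySQL gives up with
    // error 1205 ("Lock wait timeout exceeded").
    $b->exec("UPDATE accounts SET balance = balance + 100 WHERE id = 1");
} catch (PDOException $e) {
    echo "B was blocked by A's lock: " . $e->getMessage() . "\n";
}

$a->rollBack(); // release the lock
```

In a real deployment the two connections would belong to two separate PHP processes handling different requests; the longer A's transaction stays open, the longer B is stalled.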
If your transactions are unnecessarily long, they will cause excessive locking and excessive waiting. They can even lead to deadlocks, whereby two transactions each wait on a lock held by the other and neither can proceed until one of them is rolled back.
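As an illustration of the classic case, consider two transactions that lock the same two rows in opposite order. This is a sketch rather than a single runnable file: each half represents a separate PHP process running concurrently, and the `accounts` table is again hypothetical:

```
<?php
// Each block below represents a SEPARATE PHP process running concurrently;
// the sleep() calls force the interleaving that produces the deadlock.

// Process 1:
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$pdo->beginTransaction();
$pdo->exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1"); // locks row 1
sleep(1);                                                               // let process 2 lock row 2
$pdo->exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2"); // waits for row 2...
$pdo->commit();

// Process 2, running at the same time:
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$pdo->beginTransaction();
$pdo->exec("UPDATE accounts SET balance = balance - 50 WHERE id = 2");  // locks row 2
sleep(1);
$pdo->exec("UPDATE accounts SET balance = balance + 50 WHERE id = 1");  // waits for row 1: deadlock

// Each transaction now waits on a lock held by the other. InnoDB detects the
// cycle and rolls one transaction back with error 1213 ("Deadlock found when
// trying to get lock"), which PDO raises as a PDOException.
```

The narrower your transactions, the smaller the window in which two clients can interleave like this.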
For reasons of performance (and durability), you should therefore keep transactions no larger than is absolutely necessary to maintain consistency.
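Putting that into practice, here is one hedged sketch of the recommended shape: slow work (validation, pricing, rendering) happens outside the transaction, and only the writes that must succeed or fail together happen inside it. The helper functions and the `orders`/`order_items` tables are hypothetical:

```
<?php
// Hypothetical helpers: validate_input(), price_order() and render_response()
// stand in for the non-transactional parts of the request.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$input = validate_input($_POST);  // no locks are held during validation,
$total = price_order($input);     // pricing, external calls, etc.

$pdo->beginTransaction();         // locks are held only for this block
try {
    $stmt = $pdo->prepare('INSERT INTO orders (customer_id, total) VALUES (?, ?)');
    $stmt->execute([$input['customer_id'], $total]);
    $orderId = $pdo->lastInsertId();

    $stmt = $pdo->prepare('INSERT INTO order_items (order_id, sku, qty) VALUES (?, ?, ?)');
    foreach ($input['items'] as $item) {
        $stmt->execute([$orderId, $item['sku'], $item['qty']]);
    }

    $pdo->commit();               // both tables are updated, or neither is
} catch (Throwable $e) {
    $pdo->rollBack();             // revert the partial writes
    throw $e;
}

render_response($orderId);        // again outside the transaction
```

This still gives you the all-or-nothing guarantee you asked about for the writes that actually need it, without holding locks for the lifetime of the whole script.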