FS#23501 - Finer grained and more correct locking

Attached to Project: Pacman
Opened by Dan McGee (toofishes) - Tuesday, 29 March 2011, 16:25 GMT
Last edited by Xavier (shining) - Tuesday, 29 March 2011, 20:20 GMT
Task Type: Feature Request
Category: Backend/Core
Status: Unconfirmed
Assigned To: No-one
Architecture: All
Severity: Low
Priority: Normal
Reported Version: 3.5.1
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 0%
Votes: 13
Private: No

Details

We could be better served by finer-grained locking on our various writable components. It would also help with other pending issues (cache locking, FS#8226), and it addresses the recent comments on FS#9424 about storing the PID in db.lck.

Proposing the following:
* Multiple locks: a sync database lock (e.g. /var/lib/pacman/sync/db.lck), a local database lock that would also cover writes to the filesystem as a whole (/var/lib/pacman/local/db.lck), and cache lock(s) (one in the root of each configured pacman cache).
* Where possible, flock()-like semantics distinguishing exclusive (writer) from shared (reader) locks.
* Measures to ensure we never acquire locks in conflicting orders and deadlock (e.g. always taking locks in a fixed global order), or to detect and resolve a deadlock if it happens.
* Reducing the time spent holding locks to a minimum. For an -Syu we would:
- acquire sync db write lock during '-y' portion, then drop
- acquire cache write lock during package/signature download, then drop
- acquire local db write lock during actual upgrade process, then drop
- perhaps also taking read locks during these phases as necessary

Note that cache locks and flock() might not play nicely together: many people could potentially share a package cache over NFS, where flock() is historically unreliable or emulated.