Pacman

Historical bug tracker for the Pacman package manager.

The pacman bug tracker has moved to gitlab:
https://gitlab.archlinux.org/pacman/pacman/-/issues

This tracker remains open for interaction with historical bugs during the transition period. Any new bug reports will be closed without further action.

FS#23413 - Prevent large file download attacks

Attached to Project: Pacman
Opened by Allan McRae (Allan) - Thursday, 24 March 2011, 13:02 GMT
Last edited by Dave Reisner (falconindy) - Tuesday, 30 August 2011, 00:37 GMT
Task Type Bug Report
Category Backend/Core
Status Closed
Assigned To Dave Reisner (falconindy)
Architecture All
Severity Medium
Priority Normal
Reported Version git
Due in Version Undecided
Due Date Undecided
Percent Complete 100%
Votes 0
Private No

Details

Summary and Info:

A possible avenue of attack is a malicious mirror admin replacing a package, signature, or sync db with one of unlimited size. We should try to prevent this.

Details pasted from: http://mailman.archlinux.org/pipermail/pacman-dev/2011-February/012410.html

(1) Unlimited size of signature/sync db
If pacman is confronted with signature or sync db files with unlimited
size, it just keeps downloading them until the partition containing
/var/lib/pacman is filled up. This could be exploited in
denial-of-service attacks, e.g. preventing the writing of log data or mail.
Solution: Limit the maximum size of the sync db and signature files to a
specified value (e.g. 20MB) when downloading them.

(2) Packages with unlimited size
This scenario is similar to the above, but this time the size of the
packages is known from the sync db.
Solution: only download packages up to the size specified in the sync
db; abort the download afterwards.
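The abort-on-oversize check proposed in both scenarios can be sketched as a libcurl-style progress callback: curl aborts the transfer when the callback returns non-zero. All names below (`struct payload`, `progress_cb`) are hypothetical, not pacman's actual code, and the typedef stands in for `curl_off_t` so the sketch stays self-contained:

```c
#include <assert.h>

/* Stand-in for libcurl's curl_off_t, so this sketch compiles alone. */
typedef long long curl_off_t_sketch;

/* Hypothetical per-download state: max_size comes from the sync db for
 * packages, or is a fixed cap (e.g. 20 MB) for sync dbs and signatures. */
struct payload {
    curl_off_t_sketch max_size;
};

/* Mirrors the contract of a libcurl progress callback: a non-zero
 * return value tells curl to abort the transfer. */
static int progress_cb(void *userdata,
                       curl_off_t_sketch dltotal, curl_off_t_sketch dlnow)
{
    struct payload *p = userdata;
    (void)dltotal; /* total may be unknown if no Content-Length was sent */
    if (p->max_size > 0 && dlnow > p->max_size) {
        return 1; /* over the cap: abort the download */
    }
    return 0; /* under the cap: keep transferring */
}
```

This handles the case the comments below worry about, where the server never reports a Content-Length: the cap is enforced on bytes actually received, not on what the server claims.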


Note for (2) that we do not currently know the size of the package when it is installed with "pacman -U http://...". Could we get the file size in much the same manner as we do for the sync dbs?

Closed by Dave Reisner (falconindy)
Tuesday, 30 August 2011, 00:37 GMT
Reason for closing: Implemented
Comment by Dave Reisner (falconindy) - Monday, 28 March 2011, 01:08 GMT
Leaving a note here for myself: CURLOPT_MAXFILESIZE_LARGE will limit a download as a curl_off_t. If the content length is known at download time (as reported by the server) and exceeded, the transfer will fail with CURLE_FILESIZE_EXCEEDED.

If we don't know the size, we'll need to abort the transfer ourselves. It seems we have an issue with proxies and some broken servers (kernel.org?) forcing us to download extra bytes, so we'll need to watch out for this and make sure such transfers are aborted early with this error.
Comment by Dave Reisner (falconindy) - Monday, 15 August 2011, 15:17 GMT
This is mostly solved by 6dc71926f9b1, but ignores the possibility that the server won't give us a Content-Length header. Currently, a survey of all of Arch's mirrors shows that they will, without fail, provide this header for us. Of course, we should still add a max download value to the download_payload struct and abort properly from within the progress callback if this value is surpassed.
Comment by Dan McGee (toofishes) - Thursday, 25 August 2011, 21:19 GMT
I believe commit f7a3c4c8df95ad99e05d finished this up.
