FS#18292 - [dcron] Should create an ID when none is given, to be compatible with its old behaviour
Attached to Project:
Community Packages
Opened by Michael Trunner (trunneml) - Thursday, 11 February 2010, 13:41 GMT
Last edited by Evangelos Foutras (foutrelis) - Thursday, 17 January 2013, 01:19 GMT
Details
Description:
Since version 4, dcron requires an ID for @hourly, @daily, ... entries. Because of this, nearly all of my cronjobs stopped working. I think dcron should compute a hash of the command and use it as the ID when a cronjob has no ID tag; that way it stays compatible with its old behaviour. This is a problem for software that writes ordinary @hourly crontab entries and does not add an ID tag, so as not to be dcron-specific. (I know that ID=blabla is handled as an environment variable by other cron daemons. :-) )

Additional info:
* dcron 4.4-1
* The dcron homepage tells me to report bugs here.
* Because most backup software uses such crontab entries, I think the severity is high.

Steps to reproduce:
* Add a cronjob with the @hourly option and do not add ID=<something> in the command field.
* In syslog you will find error messages like this one:
uucp.log:Feb 10 19:40:01 hostname crond[2153]: failed parsing crontab for user trunneml: writing timestamp requires job nice -n 19 /usr/bin/backintime --backup-job >/dev/null 2>&1 to be named
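For illustration, here is the failing form next to a named form. The ID value `backintime` is my own example name; the `ID=<name>` prefix in the command field is the dcron-specific tag the error message asks for:

```
# Fails to parse under dcron >= 4.0: @period jobs must be named
@daily nice -n 19 /usr/bin/backintime --backup-job >/dev/null 2>&1

# Parses: an explicit ID names the job and its timestamp file
@daily ID=backintime nice -n 19 /usr/bin/backintime --backup-job >/dev/null 2>&1
```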
This task depends upon
Closed by Evangelos Foutras (foutrelis)
Thursday, 17 January 2013, 01:19 GMT
Reason for closing: Upstream
Additional comments about closing: As per the last comment, please report bugs to the upstream GitHub repo.
If there were to be a default ID, I agree a hash of the command looks like the best way to go. I'll chew on whether there's some way to work that in, without architectural changes or adding dependencies. But I'm afraid it won't happen soon. (Of course I'll consider patches.)
In the meantime, your options are to (1) get in the habit of giving an ID tag when you use @daily-style fields, (2) avoid @daily fields altogether, or (3) use a heavier-weight cron daemon.
I'll work this into the git version soon.
http://en.literateprograms.org/Hash_function_comparison_%28C,_sh%29#chunk%20def:hashfuncs
I think sdbm is good enough. Normally a user does not have so many cronjobs that a collision is really likely. :-)
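As a sketch of the proposed ID derivation, the sdbm hash from that page can be expressed in plain shell. This is an illustration only, not dcron's actual code; the function name and the 32-bit mask are my own choices:

```shell
# sdbm hash, masked to 32 bits. The classic recurrence is
#   hash(i) = c + (hash(i-1) << 6) + (hash(i-1) << 16) - hash(i-1)
sdbm() {
    h=0
    s=$1
    while [ -n "$s" ]; do
        rest=${s#?}                          # string minus its first character
        c=$(printf '%d' "'${s%"$rest"}")     # ASCII code of the first character
        h=$(( (c + (h << 6) + (h << 16) - h) & 0xFFFFFFFF ))
        s=$rest
    done
    printf '%s\n' "$h"
}

# A crontab line without an ID tag could then get a derived one:
sdbm 'nice -n 19 /usr/bin/backintime --backup-job >/dev/null 2>&1'
```

Hashing the whole line (not just the command) is what avoids collisions when the same command appears on several lines.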
But I have a better version in mind. Maybe tonight I can create a better patch.
That fixes some hash collisions when the same command is called on different lines, and it saves the hash in the line struct for later use.
I hope it is okay and that I will find it in the next release ;-)
But thanks for the patches, I'll work them in somehow. There's a big queue of changes waiting to be pushed to the public git repo.
At least such a collision with an old file is not a real problem. The only thing that can happen is that the job's first start is delayed by at most one day. That's all.
The other version would cause problems on every call, again and again.
And on new terabyte hard drives, a few small files more or less make no difference.
So the better version is to use the complete line. I wrote this version because a friend of mine at the university told me that my first version would cause problems.
By the way, you could add a cronjob that deletes all files in the timestamp spool that are older than two weeks. :-)
2. I wasn't worried about the number of timestamp files, but about collisions with them. As you observed, though, the downside of a timestamp collision is at worst a delay of one full period (which may be longer than a day, e.g. for a @weekly or @monthly job), but it is still at worst one period.
That's a minor pain, but you're right that collisions between crontab lines having the same command may be both more frequent and more painful, since crond will simply refuse to handle any line whose hash has already been used.
3. There already is a sample cronjob that kills old timestamp files. But since there are @monthly cronjobs (and longer, if anyone has a use for them), running it every 2 weeks is too frequent; I think the example runs every 90 days. IIRC, this is not installed by the Arch package. (Maybe I even added it after the last Arch release; I forget now.)
So the cleanup cronjob should run every hour, day, or month and delete all timestamp files that are much older than the longest supported period.
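A minimal sketch of such a cleanup job. The spool path is an assumption (dcron's real timestamp directory may differ), and the 90-day margin assumes @monthly is the longest period in use:

```shell
# Delete timestamp files much older than the longest supported period
# (@monthly here), leaving a generous 90-day margin.
# SPOOL is an assumed default; override it for a different spool directory.
SPOOL=${SPOOL:-/var/spool/cron/cronstamps}
if [ -d "$SPOOL" ]; then
    find "$SPOOL" -type f -mtime +90 -delete
fi
```

This could run from a @weekly or @monthly crontab entry of its own.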
Please consider moving this package to unsupported if dcron is not maintained anymore.