FS#70591 - [gitlab] Ongoing asset issues

Attached to Project: Community Packages
Opened by Matt Schultz (Korynkai) - Monday, 26 April 2021, 13:26 GMT
Last edited by Toolybird (Toolybird) - Thursday, 23 November 2023, 21:06 GMT
Task Type Support Request
Category Packages
Status Closed
Assigned To Anatol Pomozov (anatolik)
Caleb Maclennan (alerque)
Architecture All
Severity Low
Priority Normal
Reported Version
Due in Version Undecided
Due Date Undecided
Percent Complete 100%
Votes 0
Private No

Details

Description:
Since we started using the official Pacman GitLab package a few years ago (we are not sure offhand whether this began at a certain point or was always the case), we have not been able to start GitLab properly on initial installation or upgrade. It always returns HTTP error 503, "GitLab is not responding." It has always been necessary for us to re-download the assets from Git and recompile them in situ, which requires recursively modifying permissions on `/usr/share/webapps/gitlab` and installing additional packages to perform the asset recompilation. In some cases it is also necessary to wipe the bundler gem directory and re-run `bundle install`. This disrupts our upgrade workflow, requiring manual intervention to get GitLab running properly, and we would often hold back GitLab until we could perform that intervention; once it was done, GitLab would function normally. We held off on submitting a report, assuming the problem would inevitably be caught, and in order to verify that it was not specific to our configuration. We are still not 100% certain of that, but we are submitting this issue now because our confidence that it is reproducible has increased significantly.
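
For context, the in-situ recompilation we end up performing looks roughly like the sketch below; the gitlab user, paths, flags and rake task names are approximations based on the Arch package layout and the wiki, not exact commands from our notes, and may need adjusting per GitLab version.

```
# Rough sketch of the manual intervention described above (approximate).
cd /usr/share/webapps/gitlab

# Hand the tree to the gitlab user so in-place recompilation can write to it.
chown -R gitlab:gitlab /usr/share/webapps/gitlab

# Reinstall JS dependencies and recompile the assets in situ.
sudo -u gitlab yarn install --production --pure-lockfile
sudo -u gitlab bundle exec rake gitlab:assets:compile RAILS_ENV=production NODE_ENV=production

# In some cases we also have to wipe the bundled gems and reinstall them.
rm -rf vendor/bundle
sudo -u gitlab bundle install --deployment --without development test
```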

We rediscovered this issue recently after holding back GitLab for a longer period: we are in the process of rebuilding our servers and creating our own Pacman repositories with modified or custom packages, in order to expedite upgrades, adapt package patches and build options to our needs, and provide packages that may not be available in the official Pacman repositories. GitLab was one of the packages pulled into our repository specifically to stage testing and ensure a seamless upgrade roll-out, and while working on this we found that GitLab exhibited the same behavior even on a fresh installation.

We have pored over our GitLab configuration many times looking for any directive that points at asset source directories such as `/usr/share/webapps/gitlab/app/assets`, or that relates to such assets or objects in any way, to no avail. The PKGBUILD for GitLab removes these asset directories, citing an omnibus script in a comment (which apparently excludes these directories in its build), even though this is not the omnibus installation but rather the "source" installation distributed as a package. However, removing these commands from the PKGBUILD apparently resolves the issue completely:

```
# Remove unneeded directories: node_modules is only needed during build
rm -r "${pkgdir}${_appdir}/node_modules"
# https://gitlab.com/gitlab-org/omnibus-gitlab/blob/194cf8f12e51c26980c09de6388bbd08409e1209/config/software/gitlab-rails.rb#L179
for dir in spec qa rubocop app/assets vendor/assets; do
  rm -r "${pkgdir}${_appdir}/${dir}"
done
```

I will look further into which of these directories is specifically responsible, so the package footprint can stay minimal, but is there a known (if under-documented) reason for this behavior? Does anybody know why these directories would be needed if the package otherwise contains the precompiled assets and the relevant bundled objects? I can certainly understand removing source assets in favor of precompiled ones (much like excluding a .c source file from a package that ships the compiled .so or .a library), but it appears that assets and/or objects that are actually needed are being removed in the process. We have modified our own PKGBUILD to work around this, but regardless, this does not seem to be how the package should behave.
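
For illustration, a narrowed-down form of the removal might look like the following (hypothetical; at the moment our repository's PKGBUILD simply drops the whole block quoted above):

```
# Hypothetical narrowed-down removal: keep app/assets and vendor/assets in the
# package, drop only the directories that are clearly build/test-time only.
rm -r "${pkgdir}${_appdir}/node_modules"
for dir in spec qa rubocop; do
  rm -r "${pkgdir}${_appdir}/${dir}"
done
```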

@anatolik? @felixonmars? Any ideas? Thank you in advance. I can and will generate and submit more information upon request, if need be.

Versions affected: unknown earliest version (approximately when we started using the official gitlab package from Pacman; we have no record of which version that was) -> 13.10.3-2 (latest as of this report)

Logs: No service logs are relevant or useful, nor is anything sent to the systemd journal. I have previously tried to find the cause of the 503 in every possibly relevant log file (including everything under /var/log/gitlab, /var/log/httpd and the systemd journal) with no luck whatsoever. I cannot find any error being reported, not even something along the lines of "X in Y cannot find Z -> skipping". As far as the logs are concerned, GitLab is functioning properly, even though the browser consistently shows a 503 error. We originally determined it was an asset problem while running bundler/rake commands from the table at https://wiki.archlinux.org/index.php/GitLab#Useful_tips (I am not certain which ones without further experimentation, but they may have included gitlab:check, gitlab:db_migrate, etc.); those commands threw stack traces complaining that certain assets were missing. I have no logs from those runs at this time, but will set up an environment to regenerate the stack traces if needed.
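
If it helps, the checks that surfaced the stack traces would be rerun roughly like this (task names are my best recollection of the wiki table, not a verified transcript):

```
# Approximate reconstruction; run from the application directory as the gitlab user.
cd /usr/share/webapps/gitlab
sudo -u gitlab bundle exec rake gitlab:check RAILS_ENV=production
sudo -u gitlab bundle exec rake db:migrate RAILS_ENV=production
```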

Steps to reproduce:
* Install/Upgrade gitlab (as per the wiki) (note: we are using a containerized environment)
* Start gitlab services
* Start HTTP proxy server (we use Apache locally and pass through HAProxy to the public internet; note that modification of certain service files is required for Apache as opposed to NGINX anyway)
* Attempt to browse to the GitLab installation with a web browser -> 503 error (a minimal command-line check is sketched below)
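
A minimal way to confirm the failure from the command line (unit names are assumed from the Arch package and the hostname is a placeholder; adjust to your setup):

```
# Start the GitLab units and check that the front page returns 503 instead of 200/302.
systemctl start gitlab.target
systemctl --no-pager status 'gitlab-*'
curl -sk -o /dev/null -w '%{http_code}\n' https://gitlab.example.org/
```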

Additional notes regarding our setup:
Our old server installation had been using LXC to contain the GitLab installation. Our new server installation uses nspawn containers in the same manner.
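
For reference, booting such a container is nothing exotic; something along these lines (machine name and path are illustrative only, and our actual container configuration is more involved):

```
# Boot the container directly from its directory tree...
systemd-nspawn -b -D /var/lib/machines/gitlab -M gitlab
# ...or, once the image is registered with machined:
machinectl start gitlab
```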

I did not see any other issues similar to this one; if I had, I would have followed up on them rather than submitting this. I am not certain this should be considered a "bug report", as it appears to be an unintended consequence of behavior pulled into the PKGBUILD from the Omnibus scripts, and I do not know whether the package works as intended on bare metal since we only run GitLab in containers (why that should matter is beyond me, though I have seen reports of outlying issues with containerized installations of various services), so I am submitting it as a support request. It could arguably be considered a bug in the PKGBUILD; I will leave type and severity escalation to the maintainers, since I keep coming back to the question "if this really is a bug and not configuration-specific or some other outlier, and there are presumably many others using these packages without issues, why hasn't it been caught by now?", and I acknowledge that the issue likely needs further investigation.
This task depends upon

Closed by  Toolybird (Toolybird)
Thursday, 23 November 2023, 21:06 GMT
Reason for closing:  No response
Comment by Caleb Maclennan (alerque) - Saturday, 08 May 2021, 08:15 GMT
Just as a data point, I was one of the maintainers of this in the AUR before it landed in [community] and have been using it from there since. I'm running on bare metal, not containers like you. GitLab upgrades are a never-ending source of hair-pulling. Lately though, the biggest issue I've had is that the matching Ruby versions keep changing, and once you run `bundler` with the wrong Ruby version it creates all sorts of conflicts. You can see my [current workaround for that issue here](https://aur.archlinux.org/cgit/aur.git/tree/gitlab-post.sh?h=gitlab-upgrade-hook).
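
(The idea of that hook, very roughly, is to avoid running bundler unless the Ruby on the PATH matches what the packaged tree expects. The sketch below is only an illustration of that idea, not the actual gitlab-post.sh, and it assumes the packaged tree still ships a .ruby-version file.)

```
# Illustrative sketch only, not the actual gitlab-post.sh from the AUR hook.
cd /usr/share/webapps/gitlab
expected="$(cat .ruby-version 2>/dev/null)"
actual="$(ruby -e 'print RUBY_VERSION')"
if [ -n "$expected" ] && [ "$actual" != "$expected" ]; then
  echo "ruby $actual does not match expected $expected; not running bundler" >&2
  exit 1
fi
sudo -u gitlab bundle install --deployment
```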

Besides that, I have not seen issues with assets and needing to rebuild them in at least 2 years. As long as `db:migrate` gets run properly I've had pretty good luck lately. I did have similar issues with assets in something like the 8.x/9.x days, but not for a long while.
Comment by Caleb Maclennan (alerque) - Tuesday, 23 May 2023, 21:15 GMT
Is this still a current issue for you?
Comment by Buggy McBugFace (bugbot) - Tuesday, 08 August 2023, 19:11 GMT
This is an automated comment as this bug has been open for more than 2 years. Please reply if you still experience this bug; otherwise this issue will be closed after 1 month.