From patchwork Fri Sep 17 15:27:04 2021
X-Patchwork-Submitter: Mathieu Othacehe
X-Patchwork-Id: 33096
Subject: [bug#50040] [PATCH 0/2] publish: Always render nar/narinfo during backing.
From: Mathieu Othacehe
To: Ludovic Courtès
Cc: 50040@debbugs.gnu.org
Date: Fri, 17 Sep 2021 17:27:04 +0200
References: <20210813102800.805-1-othacehe@gnu.org> <87sfz2j5ls.fsf@gnu.org> <87r1eaa7gu.fsf_-_@gnu.org> <87tuix7qsw.fsf_-_@gnu.org>
In-Reply-To: <87tuix7qsw.fsf_-_@gnu.org> (Ludovic Courtès's message of "Mon, 06 Sep 2021 15:54:23 +0200")
Message-ID: <87zgsbuss7.fsf@gnu.org>
Hey Ludo,

> The ‘mumi worker’ process also seems to be doing a lot of I/O and using
> quite a bit of CPU (presumably allocating a lot).  Its ‘perf’ profile is
> like this:
>
>     2.62%  .mumi-real  libguile-3.0.so.1.1.1  [.] scm_ilength
>     1.20%  .mumi-real  libguile-3.0.so.1.1.1  [.] scm_getc
>     1.12%  .mumi-real  libgc.so.1.4.3         [.] GC_add_to_black_list_normal.part.4
>     1.06%  .mumi-real  libgc.so.1.4.3         [.] GC_build_fl
>     0.98%  .mumi-real  libgc.so.1.4.3         [.] GC_reclaim_clear
>     0.97%  .mumi-real  libguile-3.0.so.1.1.1  [.] get_callee_vcode
>     0.91%  .mumi-real  libgc.so.1.4.3         [.] GC_generic_malloc_many
>     0.90%  .mumi-real  libguile-3.0.so.1.1.1  [.] peek_byte_or_eof
>     0.78%  .mumi-real  [JIT] tid 62601        [.] 0x00007f886964804d
>
> Food for thought…

Yep, interesting findings, thanks for sharing.

I just discovered that the Nginx server is logging, in
/var/log/nginx/error.log, the errors we are interested in:

--8<---------------cut here---------------start------------->8---
2021/09/17 14:13:37 [error] 129925#0: *3727055 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: ci.guix.gnu.org, request: "GET /3m1j4ddx11prp6azw3rjdhljg5vchf1s.narinfo HTTP/1.1", upstream: "http://127.0.0.1:3000/3m1j4ddx11prp6azw3rjdhljg5vchf1s.narinfo", host: "ci.guix.gnu.org"
2021/09/17 14:13:37 [error] 129925#0: *3727020 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: ci.guix.gnu.org, request: "GET /pdhr7fgql3qm1x24yxkj4c2f6s7vffi5.narinfo HTTP/1.1", upstream: "http://127.0.0.1:3000/pdhr7fgql3qm1x24yxkj4c2f6s7vffi5.narinfo", host: "ci.guix.gnu.org"
2021/09/17 14:13:37 [error] 129925#0: *3726717 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: ci.guix.gnu.org, request: "GET /19gv6sqqcmn8q020bpm9jyl6n9bjpg05.narinfo HTTP/1.1", upstream: "http://127.0.0.1:3000/19gv6sqqcmn8q020bpm9jyl6n9bjpg05.narinfo", host: "ci.guix.gnu.org"
2021/09/17 14:13:37 [error] 129925#0: *3725361 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 141.80.167.169, server: ci.guix.gnu.org, request: "GET /y8dp69gky4dn5vnn7wnx04j20q0yrdja.narinfo HTTP/1.1", upstream: "http://127.0.0.1:3000/y8dp69gky4dn5vnn7wnx04j20q0yrdja.narinfo", host: "141.80.167.131"
--8<---------------cut here---------------end--------------->8---

Despite the following diff, which extends the timeout delays from 2 to
10 seconds, we still see a lot of those errors: some happen on the
build farm itself (141.80.167.X), some directly on user machines.

--8<---------------cut here---------------start------------->8---
diff --git a/hydra/nginx/berlin.scm b/hydra/nginx/berlin.scm
index 44ff28e..7a085e5 100644
--- a/hydra/nginx/berlin.scm
+++ b/hydra/nginx/berlin.scm
@@ -97,9 +97,9 @@
        ;; Do not tolerate slowness of hydra.gnu.org when fetching
        ;; narinfos: better return 504 quickly than wait forever.
-       "proxy_connect_timeout 2s;"
-       "proxy_read_timeout 2s;"
-       "proxy_send_timeout 2s;"
+       "proxy_connect_timeout 10s;"
+       "proxy_read_timeout 10s;"
+       "proxy_send_timeout 10s;"
--8<---------------cut here---------------end--------------->8---

I'll see if I can reproduce the I/O pressure on demand, to be able to
have more accurate strace/perf investigations.

Thanks,

Mathieu
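As a side note, one quick way to see whether the 10s bump actually helps is to tally the "upstream timed out" entries per client in the error log. This is just a sketch, not part of the patch; it is shown here against an inline sample entry, but pointing `input' at /var/log/nginx/error.log would give the real per-client counts:

```shell
# Sketch: count "upstream timed out" occurrences per client IP in an
# nginx error log.  Demonstrated on one inline sample line; replace
# "input" with the contents of /var/log/nginx/error.log for real data.
input='2021/09/17 14:13:37 [error] 129925#0: *3725361 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 141.80.167.169, server: ci.guix.gnu.org'
printf '%s\n' "$input" \
  | grep 'upstream timed out' \
  | sed -n 's/.*client: \([^,]*\),.*/\1/p' \
  | sort | uniq -c | sort -rn
```

Run daily, that would show whether the remaining timeouts cluster on the build farm addresses or are spread across user machines.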