max_execution_time reached too early #12814

Closed
Muffinman opened this issue Nov 28, 2023 · 74 comments · May be fixed by #13468

Comments

@Muffinman
Muffinman commented Nov 28, 2023

Description

Hi, I'm looking for some guidance in tracking down a very strange error where the max_execution_time setting is not being honoured correctly.

We have an incoming JSON POST request to a Laravel backend, which does some processing and stores some data.

The request is a few KB of tree data, and the controller calls a rebuildTree() method as below. Internally this makes a lot of calls to the MySQL DB and Redis DB.

Category::scoped(['site_key' => config('cms_scope')])
    ->rebuildTree($request->data);

The problem I'm facing is that this fails every time after 1 second with this error:

Maximum execution time of 30 seconds exceeded

I have added some debugging to the exception handler to try to work out why it is doing this:

var_dump(time() - $_SERVER['REQUEST_TIME']);
var_dump(getrusage());
var_dump(ini_get('max_execution_time'));
var_dump(error_get_last());
int(1) // Request had only been processing for ~1s before exception happened

array(17) {
  ["ru_oublock"]=>  int(0)
  ["ru_inblock"]=>  int(0)
  ["ru_msgsnd"]=>  int(43432)
  ["ru_msgrcv"]=>  int(51349)
  ["ru_maxrss"]=>  int(71073792)
  ["ru_ixrss"]=>  int(0)
  ["ru_idrss"]=>  int(0)
  ["ru_minflt"]=>  int(17193)
  ["ru_majflt"]=>  int(17)
  ["ru_nsignals"]=>  int(15)
  ["ru_nvcsw"]=>  int(7368)
  ["ru_nivcsw"]=>  int(104279)
  ["ru_nswap"]=>  int(0)
  ["ru_utime.tv_usec"]=>  int(212295)
  ["ru_utime.tv_sec"]=>  int(8)
  ["ru_stime.tv_usec"]=>  int(564183)
  ["ru_stime.tv_sec"]=>  int(1)
}

string(2) "30"

array(4) {
  ["type"]=>  int(1)
  ["message"]=>  string(45) "Maximum execution time of 30 seconds exceeded"
  ["file"]=>  string(112) "/path/to/site/vendor/laravel/framework/src/Illuminate/Redis/Connections/PhpRedisConnection.php"
  ["line"]=>  int(405)
}
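
As an aside, getrusage() counters are cumulative for the whole (long-lived) FPM worker process, so a per-request delta is more telling than the raw values above. A minimal sketch of that kind of measurement (cpuSeconds() is just a made-up helper), assuming it wraps the suspect call:

<?php

// Diff getrusage() between the start of the request and the failure point,
// since the counters accumulate across requests served by the same FPM worker.
function cpuSeconds(array $usage): float
{
    return $usage['ru_utime.tv_sec'] + $usage['ru_utime.tv_usec'] / 1e6
         + $usage['ru_stime.tv_sec'] + $usage['ru_stime.tv_usec'] / 1e6;
}

$startUsage = getrusage();
$startWall  = microtime(true);

// ... run the suspect code here ...

var_dump([
    'wall_s' => round(microtime(true) - $startWall, 3),
    'cpu_s'  => round(cpuSeconds(getrusage()) - cpuSeconds($startUsage), 3),
]);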

Now for the next strange part: if I set max_execution_time to some other, far larger value, it works!

// set_time_limit(30); // Didn't work, same error
// set_time_limit(60); // Didn't work, same error
// set_time_limit(120); // Didn't work, same error
set_time_limit(300); // Works. Request completed in around 3.5s!
Category::scoped(['site_key' => config('cms_scope')])
    ->rebuildTree($request->data);


PHP Version

PHP 8.2.13

Operating System

MacOS 14.1.1

@iluuu1994
Member

Hi @Muffinman. Can you share a bit more about your setup? What web server + SAPI are you using? Are you using Octane or something similar by any chance?

@Muffinman
Author

Hi @iluuu1994 ,

I'm running the Homebrew-compiled php-fpm under nginx.

No Octane in use; I only have the php-redis and imagick extensions installed via PECL, everything else should be out of the box.

@iluuu1994
Member

That's odd. macOS should use (AFAIK) setitimer(ITIMER_PROF, ...), which doesn't even include IO time. So if anything, your request should be allowed to run for longer than the configured max_execution_time. Is this issue restricted to macOS or does it also occur on other operating systems?
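
One way to probe which clock the limit actually follows (a rough sketch, relying only on set_time_limit() and sleep()): with a CPU-time based timer (ITIMER_PROF, as on Linux) the sleep() is not counted and the first echo is reached, while the pure CPU spin afterwards should trip the limit; with a wall-clock timer the sleep() alone would already abort the script.

<?php

set_time_limit(5);

sleep(10);                   // consumes almost no CPU time
echo "survived sleep()\n";   // expected with a CPU-time (ITIMER_PROF) based limit

$deadline = microtime(true) + 10;
while (microtime(true) < $deadline) {
    // pure CPU spin; should exceed the 5 second limit with either timer
}
echo "this line should never be reached\n";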

@Muffinman
Author

I've only seen it on MacOS.

One other thing I've noticed is that if I replace the call to Category::rebuildTree() with a simple sleep(5), then the timeout problem does not happen, even though the request takes longer.

There must be something deep in that code which is causing PHP either to throw the wrong exception, or otherwise somehow miscount its execution time.

@iluuu1994
Member

Unfortunately I don't have a (working) macOS machine to test this. Some minimal reproducer would be great, although I understand this might be tricky. It's also hard to rule out an error in php-redis itself.

@Muffinman
Author

Well, I think I've managed to rule out php-redis: if I switch all the Laravel queue/cache drivers away from Redis, the max_execution_time exception still happens, but in some Guzzle code instead.

Will keep experimenting to see if I can narrow it down further.

@jackwh
jackwh commented Dec 4, 2023

@Muffinman I've got exactly the same issue, relieved to see it's not just me! Have you been able to find any clues?

In my case this affects both PHP 8.2.13 and a fresh, clean install of PHP 8.3.0, both running on macOS 14.1.2 using Valet (php-fpm + nginx). However, I was tipped off to the issue after a random page on my production server (running Ubuntu 18, PHP 8.2 and nginx) started throwing HTTP 500s and timing out. So this possibly affects non-Mac environments as well.

Unfortunately it's not easy to debug further on production at the moment, as I've since had to make a short-term workaround to get the page running again. If any more clues come up I can try some specific tests though.

Other things I've noticed:

  • No consistency in terms of which line throws the error; it just seems to get tangled up in various framework code each time.
  • If I add ini_set('max_execution_time', -1) the page loads after 2 or 3 seconds.
  • If I change it to ini_set('max_execution_time', 30) (or 20, or 10, etc.) it still throws the error almost immediately. The error message also updates to show the configured timeout (e.g. "Maximum execution time of 20 seconds"), so I know it's taking effect.
  • I've searched the entire codebase (including vendor) for any scripts which might be overriding max_execution_time, set_time_limit(...), etc., but there are no occurrences.

@iluuu1994
Member

Can any of you provide a reproducible codebase? If so, you can send it to me privately over email. If possible without dependencies (databases and whatnot).

@Muffinman
Author

No, I didn't manage to track it down, unfortunately. In my case I was able to mitigate it by pausing the Model events in Laravel.

I think it should be possible to create a minimal reproduction repo.

@jackwh
jackwh commented Dec 4, 2023

@iluuu1994 I'm trying to isolate a reproducible example at the moment; I'll let you know if I manage to figure one out, but it's not straightforward.

In my case the error is limited to one specific page, and only occurs for users with lots of associated data. Just seeding a bunch of fake records doesn't seem to cause the problem. However our app does have quite complex nested relationships in user data, so this could be hiding something else.

Some more observations from further debugging...

  • I've tried adding microtime() logging at the start of index.php, and in register_shutdown_function() to catch it at the end (see the sketch after this list). The difference always adds up to the "real" duration (~1-2 seconds).
  • Often the timeout is thrown within Redis-related code, but I can still reproduce it with php-redis disabled, Redis shut down, and cache drivers changed to array.
  • I'm also seeing it thrown a lot by PDO-related framework scripts. Perhaps something to do with socket connections affecting PHP's internal timers?
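
For reference, the timing harness mentioned in the first bullet is roughly the following (just a sketch, to be dropped at the very top of public/index.php):

<?php

// Record the start time as early as possible, then log the elapsed wall time
// and any fatal error once the request shuts down.
$requestStart = microtime(true);

register_shutdown_function(static function () use ($requestStart) {
    error_log(sprintf(
        'request took %.3fs, last error: %s',
        microtime(true) - $requestStart,
        json_encode(error_get_last())
    ));
});

// ... the rest of public/index.php continues as normal ...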

@jackwh
jackwh commented Dec 6, 2023

@iluuu1994 I've got a reproducible example working 👀 It's very simple to set up:


  1. Run these three commands:

     composer create-project laravel/laravel timeout
     cd timeout
     php artisan serve

  2. Open http://127.0.0.1:8000 in your browser; you should see the Laravel welcome page.

  3. Open ./app/Providers/AppServiceProvider.php in your editor.

  4. Add an infinite loop to the boot() method:

     /**
      * Bootstrap any application services.
      */
     public function boot(): void
     {
         while (true) {
             // ... do nothing
         }
     }

  5. Start a timer, and reload the webpage.

Each time I tried this the page would load for somewhere between 20 and 40 seconds before throwing the error Maximum execution time of 60 seconds exceeded. I assume 60 seconds is the built-in server's default, whilst it's 30 seconds for Valet. The php artisan serve command shows durations for each request as they're returned, so you can confirm the timings in the console.

Given that the mistimings still occur, even though this takes longer than the 1-2 seconds we were seeing with established codebases, perhaps the root cause compounds throughout the request lifecycle?

This was using a clean install of PHP 8.3 via Homebrew on macOS Sonoma 14.1.2, although I noticed a similar issue in production on Ubuntu 18.04.4.

Let me know if I can provide any more info. Thanks 🙏

@iluuu1994
Member

@jackwh Thank you for the reproducer! Just to clarify: You are not using artisan serve or php -S in production, correct? But this is how you are working / running into this issue locally?

I noticed something peculiar. On PHP 8.3(-dev), I get the expected 30 seconds of timeout. 30s is the default value in PHP and has been basically forever. On PHP 8.2(.13), I get 60 seconds of timeout. However, var_dump(ini_get('max_execution_time')); still reports 30. Maybe the built-in server messes up the default timeout in some way. I cannot see any hard-coded values in Laravel or the built-in server.

I can reproduce this with a minimal script:

<?php
// var_dump(ini_get('max_execution_time'));
// Reports 30
while (true) {}
// Fails after 60

Changing max_execution_time manually fixes the issue for me for both versions. I will have a closer look at this.

Since PHP 8.3 actually behaves correctly in my case, I'm not sure if this is related. I get consistent 30/60s timeouts; there's no variability. I also tried with --enable-zend-max-execution-timers, but that doesn't reproduce it either. I also checked https://github.com/Homebrew/homebrew-core/blob/master/Formula/p/php.rb, but it doesn't seem to do anything special. It's possible that this is a macOS-only issue, in which case it would be nice if somebody else could try reproducing this.

@iluuu1994
Member
iluuu1994 commented Dec 6, 2023

This was indeed a false lead. The php-cli-server has a bug where max_execution_time is never set when a router script is used, because php_execute_script isn't called. This means that the max_input_time timer is never cancelled, and it was configured to 60s by default on my machine. Additionally, there's a timeout on the network socket, which is 60s by default and cannot be configured.
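
A rough illustration of the effect (router.php is a made-up name; the -d values are just examples):

<?php
// router.php — start the built-in server with:
//   php -d max_execution_time=30 -d max_input_time=60 -S 127.0.0.1:8000 router.php
// Because php_execute_script isn't called for router scripts, the 30s limit is
// never armed; on an affected build the request instead ends near the 60s
// max_input_time / socket timeout.

register_shutdown_function(static function () {
    error_log(sprintf('ran for %.1fs', microtime(true) - $_SERVER['REQUEST_TIME_FLOAT']));
});

while (true) {
    // busy loop until a timer or the socket timeout ends the request
}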

@jackwh
jackwh commented Dec 6, 2023

@iluuu1994 In production we're running nginx with php-fpm, but this could possibly be a red herring. I recall the production timeouts happening "faster than usual", but admittedly I was in a hurry to get a workaround in place, and not paying close attention to how long it took.

We've not made any recent code changes, this really did seem to just spring up out of the blue. The fact it's the same page causing timeouts on production and my Mac makes me assume they're linked, but not ruling out something else.

Having said that, my test case just used php artisan serve for convenience. I normally run my local sites through nginx+fpm on the Mac too (via Laravel Valet), it seems both are affected.

@iluuu1994
Member

@devnexen You have access to a macOS machine, right? Could you see if you can reproduce the issue at hand with the instructions from #12814 (comment)? I had no luck on Linux. Additionally, it would be great if @jackwh could try this reproducer on Linux, e.g. using Laravel Sail.

@devnexen
Member
devnexen commented Dec 6, 2023

I can't reproduce the issue; I get the Maximum execution time of 60 seconds exceeded message, but only after more than a minute. Tried a couple of times.

@iluuu1994
Member

I fixed the issue I found yesterday (#12886), but I doubt it will help in this case. @jackwh Could you try reproducing this on Linux/Docker? That would be very helpful in narrowing down this issue.

@Muffinman
Author
Muffinman commented Dec 8, 2023

I have also had this on another site, this time a Drupal 10 site - so still quite Symfony heavy but not specific to Laravel.

Fatal error: Maximum execution time of 30 seconds exceeded in /path/to/site/web/core/lib/Drupal/Core/Database/StatementWrapperIterator.php on line 110


EDIT: For this site I was able to switch to PHP 8.3.0 to test, and I can confirm that I have the same issue there.

Will test some other Drupal sites; it should be easy to reproduce if it happens on a fresh Drupal installation.

@Muffinman
Author
Muffinman commented Dec 8, 2023

OK so I went back to basics:

<?php

while (true) {
  //
}
Fatal error: Maximum execution time of 30 seconds exceeded in /Users/matt/Sites/bug-report/public/index.php on line 3


The timeout definitely seems to get shorter the more work the script does. I will try putting some PDO queries in the loop and see if that affects the response time.

@Muffinman
Author

OK so it's definitely socket related.

<?php

while (true) {
    DB::connection()->select('SHOW TABLES');
}

// Timeout in 4.7s

<?php

while (true) {
    DB::connection()->select('SHOW TABLES');
    usleep(100);
}

// Timeout in 7.2s

<?php

while (true) {
    DB::connection()->select('SHOW TABLES');
    usleep(500);
}

// Timeout in 13.6s

<?php

while (true) {
    openssl_random_pseudo_bytes(32);
}

// Timeout in 19.2s

while (true) {
    $fh = fsockopen('localhost', 80);
    fclose($fh);
}

// Timeout in 0.3s!!

@iluuu1994
Member
iluuu1994 commented Dec 9, 2023

@Muffinman Thank you! I cannot reproduce any of those on Linux. With what method (built-in server, artisan serve, etc) are you running these tests? Can you also reproduce this if you just use php -S ... test_script.php? Can you try this in Docker? I can't find anything interesting on setitimer on Darwin in the documentation.

@Muffinman
Author

These were all run under php-fpm, and I was able to reproduce it all the way back to PHP 7.4; I didn't try anything earlier.

I will re-do these tests with PHP's built-in server and an FPM Docker container and report back.

@Muffinman
Author

I can confirm I get the same timeout behaviour with PHP's built-in webserver on the native OS.

Running under Docker (php:8.3-apache) seems to work as expected, with the correct timeouts honoured.

@windaishi
Contributor
windaishi commented Mar 1, 2024

@dunglas I just saw that your PR also resolves the issue. Will this be released for 8.2 and 8.3 when merged, given that your PR is against master? If so, I think my PR can be closed.

@dunglas
Contributor
dunglas commented Mar 1, 2024

@windaishi I don't know what the policy is regarding this. Anyway, this patch currently has issues with the curl extension, and we're still investigating whether relying on libdispatch is the best option. cc @arnaud-lb @devnexen

@iluuu1994
Member
iluuu1994 commented Mar 1, 2024

I think releasing this on macOS right away is risky. The safer bet might be to switch to ITIMER_REAL (on macOS only) in the older branches for now, and use the new mechanism in master. It might be good to discuss this on the mailing list first.

@arnaud-lb
Member
arnaud-lb commented Mar 4, 2024

I agree. Before discussing on the list, could everyone confirm that #13567 fixes the issue? Also, can everyone confirm that it's a macOS 14 + Apple Silicon issue?

@Pixelshaped

On a dev machine, setting max_execution_time to 0 doesn't solve (as in "hide") the issue.

I'm getting Maximum execution time of 0 seconds exceeded messages, which is weird considering 0 is the default value for the CLI, where it correctly disables the execution time limit.

Putting in a high value is a workaround in the meantime.
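
For reference, the interim workaround amounts to something like this (3600 is just an arbitrary large value):

<?php
// Use a large but finite limit instead of 0, since 0 reportedly still
// triggers the spurious timeout on affected setups.
ini_set('max_execution_time', '3600');
// or equivalently: set_time_limit(3600);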

@arnaud-lb
Member

@Pixelshaped what version of MacOS do you have? Is it fully up to date? Does #13567 fix the issue for you?

@Mmaaikel @ejdanderson does 👍 mean that you still have the issue and #13567 fixes it?

@Herz3h
Herz3h commented Apr 26, 2024

Same issue here, Sonoma, php 8.1 (or 8.2, or 8.3), Symfony project. Error happening in different files from vendor (seems to be random).

#13567 didn't work for me

@iluuu1994
Member

@Herz3h That's somewhat surprising. How did you test the patch?

@Herz3h
Herz3h commented Apr 29, 2024

@iluuu1994 I edited the php@8.2 Homebrew formula and added the patch at the end (the patch just adds the OR condition from the PR: || (defined(__aarch64__) && defined(__APPLE__))).

@gabrielmagit

I have the same issue with PHP 8.1, 8.2, and 8.3 (via Brew, updated in the past week). No matter what max_execution_time is set to, it thinks the limit has been reached well before the configured time (e.g. with 60 seconds, after about 20-some seconds; with 300 seconds, after about 170-some seconds).

MacBook Pro M1 Pro, Sonoma 14.4.1

@barto74
barto74 commented May 17, 2024

I have the same issue with PHP 8.1, 8.2, 8.3 using MAMP PRO. The timeout error comes almost immediately, but it doesn't always happen. Reloading the page sometimes makes it work, sometimes it throws the error again. Setting max_execution_time to -1 doesn't help.

I would really appreciate a fix or a workaround; developing is hardly possible right now. Do we know if this is only an M1 Pro CPU issue? If it's not present on the M2/M3 etc., I would even replace my laptop.

MacBook Pro M1 Pro, Sonoma 14.4.1.

@NickIvanter

Do we know if this is only an M1 Pro CPU issue?

I have exactly the same issue on an M2, using MAMP PRO. The issue began after upgrading to Sonoma.

@arnaud-lb
Member
arnaud-lb commented May 17, 2024

I was able to reproduce the issue on an M2 Pro with latest Sonoma (14.5). This is reproducible in pure C, and appears to be triggered/accelerated by other syscalls: https://gist.github.com/arnaud-lb/012195a2fe4d3a2c1bff530a73ae6b11

Switching to ITIMER_REAL fixes the issue in both PHP and the C reproducer.

@arnaud-lb
Member
arnaud-lb commented May 17, 2024

I've sent a message to internals and will merge #13567 if there are no objections.

@windaishi
Contributor

@arnaud-lb Did they reply yet? 😅

@arnaud-lb
Member

There are no objections so far, so I will merge next week if it continues like that.

@arnaud-lb
Member

#13567 is now merged in 8.2 and 8.3.

@TheDigitalOrchard
TheDigitalOrchard commented Jul 5, 2024

I was seeing this on macOS and it was very puzzling! Glad to see it fixed in the latest release!

@vudaltsov
Contributor
vudaltsov commented Jul 12, 2024

I've just upgraded PHP to 8.3.9 on M1 and here's what I see when running Psalm:

> php -v
PHP 8.3.9 (cli) (built: Jul  2 2024 14:10:14) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.3.9, Copyright (c) Zend Technologies
    with Zend OPcache v8.3.9, Copyright (c), by Zend Technologies
    with blackfire v1.92.18~mac-x64-non_zts83, https://blackfire.io, by Blackfire

> tools/psalm/vendor/bin/psalm --show-info --no-diff '--no-cache'
Target PHP version: 8.1 (inferred from composer.json) Enabled extensions: random.
Scanning files...
Analyzing files...

░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  60 / 236 (25%)
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 120 / 236 (50%)
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 180 / 236 (76%)
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░PHP Fatal error:  Maximum execution time of 0 seconds exceeded in /Users/vudaltsov/Projects/typhoon/typhoon/tools/psalm/vendor/vimeo/psalm/src/Psalm/Internal/Type/ParseTree.php on line 26
Fatal error: Maximum execution time of 0 seconds exceeded in /Users/vudaltsov/Projects/typhoon/typhoon/tools/psalm/vendor/vimeo/psalm/src/Psalm/Internal/Type/ParseTree.php on line 26

------------------------------
                              
       No errors found!       
                              
------------------------------
Psalm can automatically fix 6 of these issues.
Run Psalm again with 
--alter --issues=MissingParamType --dry-run
to see what it can fix.
------------------------------

Checks took 5.10 seconds and used 177.671MB of memory
Psalm was able to infer types for 99.4033% of the codebase

Never had this issue before. Could this be related?

@vudaltsov
Contributor

Interestingly, the error is not thrown when running Psalm with --threads=1.

@highfalutin

I'm still seeing the timeout error on my M1 mac, PHP 8.1.29. Was the fix only in 8.2 and 8.3?

@jdecool
jdecool commented Jul 12, 2024

Was the fix only in 8.2 and 8.3?

It seems so. PHP 8.1 is not actively maintained anymore.

I use Homebrew through https://github.com/shivammathur/homebrew-php, which contains a patch for 8.1.

@samuel-nogueira-kununu

I started seeing the same issue as @vudaltsov after upgrading from an 8.3.x version (I don't remember which) to 8.3.9.
Also using Homebrew PHP on an M3.
--threads=1 stops that error for me as well.

@vudaltsov
Contributor

Should we create a separate issue and a reproducer for this?
