
The GNU Hurd - Bugs: bug #15297, dd uses up all memory

 
 

bug #15297: dd uses up all memory

Submitter:        Samuel Thibault <sthibaul>
Submitted:        Mon 26 Dec 2005 10:36:15 PM UTC

Category:         None
Severity:         3 - Normal
Priority:         5 - Normal
Item Group:       None
Status:           Need Info
Privacy:          Public
Assigned to:      None
Originator Name:
Open/Closed:      Open
Reproducibility:  None
Size (loc):       None
Planned Release:  None
Effort:           0.00



Discussion

Fri 06 Jan 2006 12:51:20 PM UTC, comment #2: 

Yes, I can.

I have also found that, after removing the file, not all the memory gets freed.

Soeren D. Schulze <sdschulze>
Fri 30 Dec 2005 06:16:58 PM UTC, comment #1: 

Can anyone reproduce this?

Alfred M. Szmidt <ams>
Mon 26 Dec 2005 10:36:15 PM UTC, original submission:  

From http://bugs.debian.org/37945

From: -email is unavailable-
To: -email is unavailable-
Subject: hurd: dd uses up all memory
Date: Tue, 18 May 1999 21:49:04 +0200

Package: hurd
Version: N/A

I have 48 MB of RAM; after booting, about 36 MB are still free.

$ dd if=/dev/zero of=/tmp/image bs=1024k count=50
(default pager): dropping data request because of previous paging errors
(default pager): dropping data request because of previous paging errors
(default pager): dropping data request because of previous paging errors
(default pager): dropping data request because of previous paging errors
...
This scrolls heavily; then the system dies and hangs.

Then I reboot and look at /tmp/image. It's about 34 MB big (so about the
amount of memory I had free).

Then I try again, but this time I am more clever. I activate swap. I have a
128 MB swap partition.

$ swapon /dev/hd2s1
$ dd if=/dev/zero of=/tmp/image bs=1024k count=50
$

This command takes a long time, btw, partly because it starts to use the
swap after 36 MB :)

Writing to a file does take as much memory as you write. This shouldn't be
the case: when no memory is available, it should start to flush the disk
cache or something like that. I have not tried writing to /dev/null (this is
left as an exercise for the reader).

Thanks,
Marcus

From: Roland McGrath <roland@gnu.org>
To: brinkmd@debian.org, -email is unavailable-
Cc: -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Tue, 18 May 1999 17:39:22 -0400

I have reproduced this.  (It does not happen with /dev/null; that works
fine.)  It appears to be some sort of leak in ext2fs (or libdiskfs; I
haven't been able to test ufs).  The memory-related info from ps is bogus
(at least it is for me), but vmstat clearly shows swap being eaten up as
the dd runs, and it doesn't come back when it's finished.  Running vminfo
on the ext2fs process shows a lot more pages allocated after than before.
(If you dd an amount about the size of your physical RAM, you notice some
obvious thrashing, too.)

Note that with a usage pattern like this, there is a huge explosion of
threads in the filesystem (mine has 914).  That is a known issue and not
actually a leak.  But there is also a leak here.  Repeating the large dd,
my ext2fs process never gets more threads, but it does end up with some
more pages allocated in its address space (I think they are anonymous pages).
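
For anyone wanting to make the same observation programmatically, the sketch
below polls the kernel's page counters, which is part of what vmstat reports
(swap usage itself is tracked by the default pager, not by this call).  It
assumes GNU Mach's vm_statistics RPC and standard Hurd headers; the helper
name print_vm_counters is hypothetical, chosen for illustration only.

  /* Minimal sketch: read GNU Mach's VM counters before and after a test,
     roughly the page counts vmstat shows.  Assumes the vm_statistics RPC
     provided by GNU Mach; print_vm_counters is a hypothetical helper.  */
  #include <stdio.h>
  #include <mach.h>
  #include <mach/vm_statistics.h>

  static void
  print_vm_counters (const char *when)
  {
    struct vm_statistics vs;
    kern_return_t err = vm_statistics (mach_task_self (), &vs);
    if (err != KERN_SUCCESS)
      {
        fprintf (stderr, "vm_statistics: %d\n", err);
        return;
      }
    printf ("%s: free %d, active %d, inactive %d, wired %d (page size %d bytes)\n",
            when, vs.free_count, vs.active_count, vs.inactive_count,
            vs.wire_count, vs.pagesize);
  }

  int
  main (void)
  {
    print_vm_counters ("before");
    /* ... run the dd test here, then compare the counters ... */
    print_vm_counters ("after");
    return 0;
  }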


From: Mark Kettenis <kettenis@wins.uva.nl>
To: -email is unavailable-
Cc: brinkmd@debian.org, 37945@bugs.debian.org, bug-hurd@gnu.org, -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Sun, 30 May 1999 20:36:43 +0200 (CEST)

   Date: Tue, 18 May 1999 17:39:22 -0400
   From: Roland McGrath <roland@gnu.org>

   I have reproduced this.  (It does not happen with /dev/null; that works
   fine.)  It appears to be some sort of leak in ext2fs (or libdiskfs; I
   haven't been able to test ufs).  The memory-related info from ps is bogus
   (at least it is for me), but vmstat clearly shows swap being eaten up as
   the dd runs, and it doesn't come back when it's finished.  Running vminfo
   on the ext2fs process shows a lot more pages allocated after than before.
   (If you dd an amount about the size of your physical RAM, you notice some
   obvious thrashing, too.)

I think the fact that the swap doesn't come back when the dd is
finished is related to the fact that we use memory objects that may be
cached.  This means that the kernel keeps the memory object around for
a while, even if it is no longer mapped.  You can see that the swap
comes back if you remove the file created with dd or if you terminate
the filesystem server.
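
The attribute involved here appears to be the pager's "may cache" flag.  A
minimal sketch of how a libpager-based server such as ext2fs could turn it
off is below; the pager_change_attributes prototype is quoted from memory of
libpager's header and should be treated as an assumption, and disabling
caching would trade the lingering swap usage for worse re-read performance.

  /* Sketch: ask the kernel not to keep this pager's memory object cached
     once it is no longer mapped, so its pages (and any swap backing them)
     are reclaimed promptly instead of lingering until the file is removed
     or the filesystem server exits.  Assumes a libpager-based server; the
     exact pager_change_attributes prototype is an assumption.  */
  #include <mach.h>
  #include <mach/memory_object.h>
  #include <hurd/pager.h>

  static void
  uncache_pager (struct pager *p)
  {
    /* may_cache = FALSE: drop the object when the last mapping goes away.
       MEMORY_OBJECT_COPY_DELAY is the copy strategy disk pagers normally
       use; the final argument (1) asks the call to wait for completion.  */
    pager_change_attributes (p, FALSE, MEMORY_OBJECT_COPY_DELAY, 1);
  }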

The thrashing that occurs if you dd an amount about the size of your
physical RAM indicates that there is something wrong with the way
paging is done.  It is either a kernel bug or a bug in the default pager.

   Note that with a usage pattern like this, there is a huge explosion
   of threads in the filesystem (mine has 914).  That is a known issue
   and not actually a leak.  But there is also a leak here.  Repeating
   the large dd, my ext2fs process never gets more threads, but it
   does end up with some more pages allocated in its address space (I
   think they are anonymous pages).

I repeatedly create a 10 MB file using

   $ dd if=/dev/zero of=file bs=1024k count=10

The first time the number of threads increases a lot, to about 60, and
the amount of free swap decreases by approximately 10 MB.  The second
time it increases to about 80.  Then it increases in small steps to
about 95.  For a while after the dd command finishes, the amount of
virtual memory used is indeed higher than before, but after a while it
stabilizes.  The difference, compared to the amount of virtual memory
used before the dd command, seems to be very close to 16 times the
increase in the number of threads (in pages).  No surprise, since the
default stack size of a thread is 16 pages.


By the way, do you realize that the explosion of threads in the
filesystem is very bad?  The stack space alone used by your 914
threads is 16 * 4 kB * 914 ≈ 60 MB.  Are there any thoughts on fixing
this explosion of threads?

Mark

From: Roland McGrath <roland@gnu.org>
To: Mark Kettenis <kettenis@wins.uva.nl>
Cc: brinkmd@debian.org, 37945@bugs.debian.org, bug-hurd@gnu.org, -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Sun, 30 May 1999 14:54:54 -0400

> By the way, do you realize that the explosion of threads in the
> filesystem is very bad?  The stack space alone used by your 914
> threads is 16 * 4 kB * 914 ≈ 60 MB.


That is only virtual space, and swap is lazily allocated.  I suspect that
in reality most or all threads in the filesystem only touch one or two
pages of stack, so the rest is just eating address space.

From the nature of the code, I suspect that one could determine by static
analysis a bound on the thread stack size actually used by the filesystem
servers.  Then you can tune the thread stack size accordingly.
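
As an illustration of the "tune the thread stack size" idea: the Hurd servers
of this era used cthreads, but the same knob is easiest to show with standard
pthreads.  The sketch below caps a worker thread's stack at two pages instead
of the 16-page default mentioned above; the two-page figure and the function
names are assumptions for illustration, not a bound derived from the actual
filesystem code.

  /* Sketch: create a request-handling thread with a small fixed stack
     instead of the 16-page default discussed above.  Uses standard
     pthreads for illustration (the servers of the time used cthreads);
     the 2-page cap is an assumed, not measured, bound.  */
  #include <limits.h>
  #include <pthread.h>
  #include <unistd.h>

  static void *
  worker (void *arg)
  {
    /* ... handle one request ... */
    return arg;
  }

  int
  spawn_small_stack_thread (pthread_t *tid)
  {
    pthread_attr_t attr;
    size_t page = (size_t) sysconf (_SC_PAGESIZE);
    size_t stack = 2 * page;
    if (stack < PTHREAD_STACK_MIN)
      stack = PTHREAD_STACK_MIN;

    pthread_attr_init (&attr);
    pthread_attr_setstacksize (&attr, stack);
    int err = pthread_create (tid, &attr, worker, NULL);
    pthread_attr_destroy (&attr);
    return err;
  }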

> Are there any thoughts on fixing this explosion of threads?


Thomas will have to give the details.  My understanding is that the problem
is that the kernel generates huge numbers of one-page pageout requests more
or less simultaneously.  Ideally, the kernel would produce fewer requests
for ranges of multiple pages, and that might by itself suffice; but I don't
know how complicated implementing that in the kernel would be.  The other
alternative is for the server to throttle the requests in some way to put a
limit on the number of simultaneous paging requests.  I think the way to do
that is to remove the port from the portset once there are a given number
of threads handling paging requests to that port, and then reinsert it when
enough outstanding requests finish.
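
A rough sketch of that second alternative (throttling by pulling the port out
of the portset) is shown below.  It is illustrative only: the counter, the
lock, and the names throttle_begin/throttle_end are made up rather than part
of libports or libpager, and real code would hang this state off the server's
port structures; only mach_port_move_member is an actual Mach call.

  /* Sketch of the throttling idea: once MAX_IN_FLIGHT paging requests are
     being handled for a pager port, remove that port from the portset so
     no new requests are received for it; reinsert it when the count drops
     back below the limit.  throttle_begin/throttle_end and the counter are
     hypothetical, not existing libports or libpager API.  */
  #include <mach.h>
  #include <pthread.h>

  #define MAX_IN_FLIGHT 16

  static pthread_mutex_t throttle_lock = PTHREAD_MUTEX_INITIALIZER;
  static unsigned int in_flight;

  /* Call when a paging request received on PORT starts being handled.  */
  void
  throttle_begin (mach_port_t portset, mach_port_t port)
  {
    (void) portset;  /* only needed again on reinsertion */
    pthread_mutex_lock (&throttle_lock);
    if (++in_flight == MAX_IN_FLIGHT)
      /* Moving the port to MACH_PORT_NULL takes it out of the portset, so
         the server's mach_msg loop stops picking up new requests for it.  */
      mach_port_move_member (mach_task_self (), port, MACH_PORT_NULL);
    pthread_mutex_unlock (&throttle_lock);
  }

  /* Call when a paging request on PORT has been fully handled.  */
  void
  throttle_end (mach_port_t portset, mach_port_t port)
  {
    pthread_mutex_lock (&throttle_lock);
    if (in_flight-- == MAX_IN_FLIGHT)
      /* Put the port back so queued requests are serviced again.  */
      mach_port_move_member (mach_task_self (), port, portset);
    pthread_mutex_unlock (&throttle_lock);
  }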

From: Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de>
To: Roland McGrath <roland@gnu.org>
Cc: Mark Kettenis <kettenis@wins.uva.nl>, 37945@bugs.debian.org, -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Sun, 30 May 1999 22:28:53 +0200

On Sun, May 30, 1999 at 02:54:54PM -0400, Roland McGrath wrote:

> > Are there any thoughts on fixing this explosion of threads?
>
> Thomas will have to give the details.  My understanding is that the problem
> is that the kernel generates huge numbers of one-page pageout requests more
> or less simultaneously.  Ideally, the kernel would produce fewer requests
> for ranges of multiple pages, and that might by itself suffice; but I don't
> know how complicated implementing that in the kernel would be.


I think this may explain a big performance penalty in the Hurd: even when
reading or writing blocks sequentially, I experience something that can best
be described as a "beat". The disk activity is faster<->slower<->faster...
I assume this has to do with the way the one-page pageout requests are sent
and received.

Collecting multiple requests into ranges would be a good thing. Not only
would it help with the memory and threads used, but we would also hopefully
gain a performance boost.

Thanks,
Marcus

From: -email is unavailable- (Thomas Bushnell, BSG)
To: -email is unavailable-
Subject: Filesystem activity consumes memory
Date: Wed, 16 Jun 1999 12:57:27 -0400 (EDT)

I have checked in a fix to ext2fs that should prevent filesystem
activity from endlessly chewing up swap space.

Thomas

From: Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de>
To: -email is unavailable-
Cc: -email is unavailable-
Subject: Re: Bug #37945 acknowledged by developer (hurd: dd uses up all memory)
Date: Thu, 17 Jun 1999 23:25:43 +0200

reopen 37945
thanks

On Wed, Jun 16, 1999 at 12:03:15PM -0500, Debian Bug Tracking System wrote:

>
> I have checked in a fix to ext2fs that should prevent filesystem
> activity from endlessly chewing up swap space.
>
> Thomas


Unfortunately, I can still reproduce this with 48 MB of memory, no swap, and
writing a 50 MB file to disk. It even seems to be worse than before: it dies
after writing only 18 MB (before the change it managed twice as much).

Thanks,
Marcus

From: -email is unavailable- (Thomas Bushnell, BSG)
To: -email is unavailable-
Cc: bug-hurd@gnu.org, -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Sat, 19 Jun 1999 16:04:32 -0400 (EDT)

It would be helpful, I think, to know whether this bug afflicts only
ext2fs or both ext2fs and ufs.  The paging implementation is slightly
different between the two in some key ways, and I'd really be
interested in finding out whether it's ext2fs-specific.

Thomas

From: -email is unavailable- (Thomas Bushnell, BSG)
To: -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Tue, 29 Jun 1999 05:08:31 -0400 (EDT)

To get the right information into the bug log:

I have made significant paging changes to Mach which I believe will
solve this problem; I'm awaiting confirmation from the people who've
been observing it.

Thomas


From: Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de>
To: "Thomas Bushnell, BSG" <tb@MIT.EDU>, -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Tue, 29 Jun 1999 18:58:51 +0200

On Tue, Jun 29, 1999 at 05:08:31AM -0400, Thomas Bushnell, BSG wrote:

>
> To get the right information into the bug log:
>
> I have made significant paging changes to Mach which I believe will
> solve this problem; I'm awaiting confirmation from the people who've
> been observing it.


I tried booting it, but it is panicking; there is some problem with the pager.
I could write down all the numbers from the screen if that is useful to you.

Thanks,
Marcus

From: -email is unavailable- (Thomas Bushnell, BSG)
To: Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de>
Cc: -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: 29 Jun 1999 14:07:39 -0400

Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de> writes:

> I tried booting it, but it is panicking; there is some problem with the
> pager. I could write down all the numbers from the screen if that is
> useful to you.


Oh, joy, I guess I'll have to start debugging it.  ick.

From: Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de>
To: "Thomas Bushnell, BSG" <tb@MIT.EDU>, -email is unavailable-
Subject: Re: Bug #37945: hurd: dd uses up all memory
Date: Thu, 22 Jul 1999 17:07:58 +0200

On Tue, Jun 29, 1999 at 05:08:31AM -0400, Thomas Bushnell, BSG wrote:

>
> To get the right information into the bug log:
>
> I have made significant paging changes to Mach which I believe will
> solve this problem; I'm awaiting confirmation from the people who've
> been observing it.


Okay, I have now verified that it both works and doesn't.

With swap enabled (I tried with a 16 MB swap file), it worked fine! I was
able to create a 65 MB file with dd on a machine with 44 MB of free RAM and
16 MB of swap.

However, with no swap at all, it still crashes. I tried to write a 45 MB
file on my machine (44 MB free, no swap), and it crashed.

Apart from that, the general I/O performance seems to be better now.

Thanks,
Marcus


Samuel Thibault <sthibaul>
Group administrator

 

Attached Files


No files currently attached

 

Dependencies

Depends on the following items: None found

Items that depend on this one: None found

 

Mail Notification Carbon-Copy List

CC list is empty

 

Votes

There are 0 votes so far.

 

History

Follows 1 latest change.

Date         Changed by   Updated Field   Previous Value => Replaced by
2005-12-30   ams          Status          None => Need Info
