[Geany-devel] Geany memory behavior "suspected memory leak - ID: 3415254"
elextr at xxxxx
Sat Oct 15 00:16:14 UTC 2011
> Geany at least keeps all the GeanyDocuments alive and tries to re-use
> them (document_create() at line 564). This avoids having to re-allocate
> the document struct everytime, though it'll then never release this
> memory. However, either the user won't open so much files at once or
> she's likely to keep having a large amount of open files, so I don't
> think it's a (the?) problem -- and anyway I guess the allocation would
> be quite small.
But what that might be doing is preventing release of whole slabs back
to the system.
> However, we'd probably better use GSlice here (for fragmentation and
> maybe caching), and maybe even release the memory to GSlice so it does
> all the management, and could even release it to the system at some
> point if it makes sense to it.
We should probably just release anything we are finished with and let
the allocator do its thing, rather than trying to second-guess it; we'll
always be wrong :)
> Also note that we never release the global tags, so it's likely the
> first run loads those and they become part of the 22Mb. However, on my
> installation the C global tags only use something like 400k of memory.
That probably comes under the category of "need" to keep.
>> 2. ignoring the first open, the amount of extra memory used to open
>> all the files varied from 0.4Mb to 2.9Mb. Average 1.1Mb or 5% of the
>> initial 22.8Mb and standard deviation of 0.665Mb or 3% of the 22.8Mb.
>> 3. Simple leaks are unlikely to cause the large deviation in the
>> memory increase, each open of the same set of files would tend to leak
>> the same amount. It is therefore likely that fragmentation effects
>> cause the large deviation, but it may not be the whole cause of the variation.
> We are not the only possible source of leaks. At least Valgrind pointed
> out that GTK leaks quite a lot in the file chooser dialog.
Well Valgrind and GTK don't entirely get on, so false positives are
common. Also according to gdb, invoking the file dialog creates
another thread somewhere, so Valgrind may be seeing the thread stack.
> Also, it seems FontConfig, Pango or Scintilla (any of them could misuse
> the one below it) have a few leaks -- I think it's not (much of)
> Scintilla's fault, but I don't know it well enough to debug it. Some look
> like one-time leaks, so probably unfreed initializations; some others
> seem to happen more often.
Scintilla certainly does once-only initialisations whose memory is never
returned; that's a fairly common C++ idiom. And I would expect
FontConfig to cache things too, so it is going to be hard to debug. I
think all we should do is report it if we are reasonably sure the
problem exists, even if we can't find the mechanism.
> Since I only speak of "definitely lost" leaks (as Valgrind calls them),
> this should not include too many false positives, though it still can --
> we had a few "definitely lost" leaks in Geany that I have fixed or could
> fix, but those would not reduce the real memory usage since they would
> only be freed at closing time anyway.
Well, in this bug nobody is complaining about the actual memory usage,
just the growth over time.
>> Since Geany uses three different allocators, each of which has
>> differing policies for holding onto memory, it is going to be
>> difficult to separate real leaks from allocator effects.
> You could easily reduce this to 2 by using the G_SLICE=always-malloc
> environment variable when running the tests. This makes the GSlice API
> simply wrap plain malloc/free for easier debugging ;)
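For reference, the usual GLib debugging setup combines that variable with G_DEBUG=gc-friendly when running under Valgrind (the geany invocation itself is just illustrative):

```shell
# Make GSlice a thin wrapper over plain malloc/free, and ask GLib to
# avoid memory-reuse tricks that confuse Valgrind's leak tracking.
G_SLICE=always-malloc G_DEBUG=gc-friendly \
    valgrind --leak-check=full --num-callers=30 geany
```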
Tried with this, average growth was almost identical, but the
deviation was about half that previously observed. Makes sense.
>> For the bug reporter to have accumulated 300Mb of memory over "a few"
>> days would have needed about 500 file opens per day, but maybe
>> somewhat less as editing increases the memory usage.
> That's the problem with profiling and testing: we generally do it in
> unrealistic situations... either for lack of time or because of other
> constraints, but heh, still not real.
Or onset of boredom :)
>> So I don't think we have to worry excessively that we have a major
>> leak, but keep an eye open for any possible problems.
> Agreed, but keeping an eye on memory usage is always a good thing :)
BTW, I noticed that after opening all those files, often a few seconds
after they were open there were a few seconds of 400% CPU usage (it's a
quad-core machine). Since nothing we do uses that sort of parallel
processing, I wonder if it's the memory manager busily trying to coalesce
adjacent chunks. After all, 95% of the memory is re-used.
PS Colomban, did you see my question on mio use?