On 10 March 2011 11:57, Thomas Martitz <thomas.martitz@student.htw-berlin.de> wrote:
On 10.03.2011 01:23, Lex Trotman wrote:
On 10 March 2011 00:45, Colomban Wendling <lists.ban@herbesfolles.org> wrote:
On 09/03/2011 05:31, Lex Trotman wrote:
Hi Colomban,
Agree with most, comment on the rest below.
On 9 March 2011 14:00, Colomban Wendling <lists.ban@herbesfolles.org> wrote:
On 08/03/2011 22:29, Colomban Wendling wrote:
On 08/03/2011 19:58, Enrico Tröger wrote:
> On Tue, 08 Mar 2011 11:06:16 +0100, Frank wrote:
>> On 23.02.2011 01:10, Matthew Brush wrote:
>>> For a first thing, maybe we could enforce use/passing of the tools
>>> mentioned and these before adding to a release, examples:
>>> http://www.splint.org/
>>> http://valgrind.org/info/tools.html
>>> (suppression file for GTK - http://people.gnome.org/~johan/gtk.suppression)
>>> http://www.gnu.org/software/indent/ (just for making coding styles
>>> more consistent between plugins)
>>> http://check.sourceforge.net/ or http://cutest.sourceforge.net/ or
>>> http://cunit.sourceforge.net/
>>> Perhaps some or all of these could be automated.
I don't suggest (yet) using splint, because I haven't found it useful: it reported far too many things (some of them false positives) to be really usable. However, I probably don't know how to configure it (and haven't had the time to find out yet), so if somebody with experience with it can provide some hints, maybe we can add this too.
For Valgrind however, I doubt we can do anything automatically, because it's a runtime checker, and its output must generally be read with some care. Writing some sort of guidelines and howto is probably the way to go.
>> I like that idea. Can one of you build up a howto on how to use it?
>> I did try valgrind in the past and wished for some advice ;)
>>
>> Once this is done we can think of automatic tests with some of these
>> tools.
>
> I, and obviously Colomban as well, though independently of each other :D,
> recently played [1,2] with cppcheck, a small tool for static code analysis
> which actually found a few things in the geany-plugins repository.
>
> As I'm currently reworking the system that creates the nightly builds, we
> could integrate such checks into the nightly builds, maybe run cppcheck
> on the sources after the builds and present the results somewhere on
> nightly.geany.org.
I think it's a good idea.
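For illustration, here is the sort of defect cppcheck's static analysis reports: a made-up snippet (not from Geany or the plugins) that leaks a FILE handle on an early-return path.

#include <stdio.h>

/* Hypothetical example: cppcheck reports a resource leak here, because
 * "fp" is never closed on the early-return path inside the loop. */
int count_lines(const char *filename)
{
    FILE *fp = fopen(filename, "r");
    int lines = 0;
    int c;

    if (fp == NULL)
        return -1;

    while ((c = fgetc(fp)) != EOF)
    {
        if (c == '\n')
            lines++;
        if (lines > 10000)
            return lines;    /* leak: fp is not closed on this path */
    }
    fclose(fp);
    return lines;
}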
I did a few checks, and this is what I suggest:
- run cppcheck on `make check` and abort if it detects an error;
- enable by default (though in a portable manner) some compiler flags
such as -Werror-implicit-function-declaration [1] [2]
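To illustrate why this one matters, a small made-up example (not real Geany code): with the #include <stdlib.h> missing, malloc() is implicitly declared as returning int, so its result can be truncated on 64-bit platforms; the flag turns this into a compile error instead of a runtime crash.

#include <string.h>

/* Hypothetical example: <stdlib.h> is missing, so malloc() is implicitly
 * declared as returning int and its result may be truncated on 64-bit
 * platforms.  With -Werror=implicit-function-declaration this fails to
 * compile instead of misbehaving at runtime. */
char *dup_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);    /* implicit declaration of malloc */

    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}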
After a little more thinking and testing, I suggest:
- -Werror=implicit-function-declaration (see above);
- -Werror=pointer-arith, to avoid portability issues with untyped
pointer arithmetic;
- -Wundef because preprocessor tests should rely on defined variables
or explicitly use ifdef;
- -Wshadow because shadowing symbols is a Bad Practice (and it may even be
non-standard, though I'm not quite sure);
Agree for shadowing of locals and parameters; agree for globals too, but only because all globals should be prefixed with a module prefix the way the Geany_ symbols are. Is there any way to check this?
Not that I know, unfortunately. However maybe a dumb sed/grep might do the trick.
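For reference, this contrived snippet (made up, not from Geany) shows what -Wshadow complains about: the inner "len" silently hides the parameter, so the last test doesn't check what it looks like it checks.

#include <string.h>

/* Contrived example: -Wshadow warns that the inner "len" shadows the
 * parameter, so the final test uses the parameter rather than the
 * strlen() result, which is probably not what was intended. */
int check_buffer(const char *buf, size_t len)
{
    if (buf != NULL)
    {
        size_t len = strlen(buf);    /* shadows the parameter "len" */

        if (len == 0)
            return 0;
    }
    return len > 0;    /* refers to the parameter, not the inner "len" */
}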
- -Waggregate-return because returning aggregate values is no good for
performance and it's not a Good Practice in C;
Why isn't it good? Performance should be as good as, if not better than, returning multiple values via pointer parameters and then assembling the aggregate in the caller. Why isn't it Good Practice?? (your caps :-) It's been standard ever since someone invented allocating the return variable on the stack prior to the call, so it can be referenced relative to the frame pointer.
Well... I must admit I'm not really a pro on the subject, but AFAIK the main problem is the ABI: since the return value will not necessarily fit in registers, the compiler is likely to generate a dummy parameter to the function (as you might have done yourself), but it does this implicitly, thereby changing the "natural" ABI of the function.
Functions that return too much to fit in the registers should use the stdcall ABI, which defines an extra reference parameter for the aggregate (or rather, the compiler should use it). There is no such thing as a "natural" ABI that is corrupted by aggregate returns; there are ABIs defined by Intel and other CPU makers for all the required call/return patterns. Yes, it's slower than returning something in a register, but if you have to return an aggregate value that's too big, you have to return it. Might as well let the compiler do the heavy lifting rather than having to declare a temporary and pass a pointer to it as an explicit parameter, or otherwise return the components and assemble the aggregate in the caller.
Returning an aggregate is exactly the same as manually allocating the aggregate and passing a pointer to the function (in order to let it fill the aggregate) on most if not all ABIs. Without any difference in performance.
Exactly, summarised beautifully.
Of course the latter has the advantage that you can additionally return a failure-indicating value (-1 or NULL), which is why you generally don't see functions returning aggregates.
Yes, it's the habit created by the standard libraries, and they did it that way because C doesn't have an out-of-band signaling mechanism to handle errors. So it's all K&R's fault... :-)
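To make the comparison concrete, here is a hypothetical pair of equivalent functions (the names and struct are invented for the example): the aggregate return that -Waggregate-return flags, and the out-parameter form that additionally leaves the return value free for an error indicator.

#include <stdbool.h>
#include <string.h>

typedef struct
{
    int line;
    int column;
    char filename[256];
} Location;

/* Aggregate return: -Waggregate-return flags this.  For a struct this large
 * the compiler passes a hidden pointer to caller-allocated storage, so the
 * generated code is essentially the same as the variant below. */
Location get_current_location(void)
{
    Location loc = { 1, 1, "untitled" };
    return loc;
}

/* Explicit out-parameter: the same mechanism written by hand, which leaves
 * the return value free to report success or failure. */
bool get_current_location_to(Location *loc)
{
    if (loc == NULL)
        return false;
    loc->line = 1;
    loc->column = 1;
    strcpy(loc->filename, "untitled");
    return true;
}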
Cheers Lex
Best regards.