Hi all,
From discussions elsewhere, such as [1], it sounds like one of the
things holding back Geany development right now is a need for more testing.
I have some spare time that I can dedicate to exploratory testing of PRs to Geany and Geany-Plugins. I'm not a QA professional, but I am a programmer, I use a range of Geany features daily, and I understand Geany's code.
How can I test PRs in a way that would really help them get merged?
In particular:
1. How can I determine that a PR is mostly blocked on testing, and is likely to be merged when positive testing results come in? Some PRs are marked as "approved" in GitHub yet are not merged -- is that it?
2. How can I communicate my results to the satisfaction of Geany committers? For example, I could write up some kind of a report: an outline of what I tried, with screenshots of what I got -- would that help?
Thank you.
[1] https://github.com/geany/geany/pull/1246#issuecomment-290047712
Hi Vasily,
On 28 April 2017 at 06:51, Vasiliy Faronov vfaronov@gmail.com wrote:
Hi all,
From discussions elsewhere, such as [1], it sounds like one of the things holding back Geany development right now is a need for more testing.
I can only speak from my point of view, but I believe it probably applies to at least some other committers. I can afford a few minutes at a time through the day to answer emails and IRC, comment on PRs, and even inspect the code for simple PRs (which I mark LGBI, Looks Good By Inspection).
But I can't often afford a bigger block of time to confirm the fault exists on my system, grab the PR, test that the fault no longer exists, merge to master, oops, master is not up to date, pull changes, back to the merge, push to GitHub. Sure, that process is more polished for those who have the time to commit more often, but they are also short of time.
Personally, I am willing to skip testing myself for PRs that are complete and immediately committable, not too complex or controversial, and have been tested by someone who seems reliable (other than the OP; it's way too easy to miss problems in your own work).
Also, I can (have to) accept others' testing when I don't have the setup to test (Windows, weird desktops, and networked file systems being the prime examples).
This is the approach I use for another project where I don't want to load the gigabytes of dependencies needed to fully test changes on this machine.
I have some spare time that I can dedicate to exploratory testing of PRs to Geany and Geany-Plugins. I'm not a QA professional, but I am a programmer, I use a range of Geany features daily, and I understand Geany's code.
Testing as part of your normal usage is by far the best way, so long as that usage includes the feature (one of the issues with the PR you referenced: it's off by default and simply not wanted in my workflow, so it will only ever get artificial testing from me).
How can I test PRs in a way that would really help them get merged?
In particular:
- How can I determine that a PR is mostly blocked on testing, and is
likely to be merged when positive testing results come in? Some PRs are marked as "approved" in GitHub yet are not merged -- is that it?
The review and approval system in GitHub is pretty new and we don't use it consistently yet, so no, it's not a reliable indicator. Unfortunately it's probably going to require slightly more judgement:
1. Has someone checked it and posted that it looks OK? (I try to use LGBI consistently, but it's not universal.)
2. Have any requested changes been made? You will find there is a distressingly high proportion of PRs where a small change by the OP would make them committable, but they don't seem to do it.
3. Is it non-controversial, or has it reached a consensus (like the one you referenced)?
- How can I communicate my results to the satisfaction of Geany
committers? For example, I could write up some kind of a report: an outline of what I tried, with screenshots of what I got -- would that help?
For simple PRs, just posting "I have been running with this for the last week/month/whatever, using it often, and it works fine" is likely to be sufficient. For more complex ones, a description of how you tested it is likely to be needed. Screenshots would only be relevant if the purpose of the change was to affect the way something looks.
Things that interact with the operating system or files are the most difficult since they should also be tested on Windows (which the Geany team don't regularly use) and/or remote filesystems (there seem to be a lot of users of SSHFS in particular).
In all cases, just posting on the PR "I am testing this, will report back" is good. Then, if anything special is needed or if it's not ready, somebody will probably notice and post a request.
Thanks Lex
On 27.04.2017 at 22:51, Vasiliy Faronov wrote:
Hi all,
From discussions elsewhere, such as [1], it sounds like one of the things holding back Geany development right now is a need for more testing.
Helping to test PRs is truly needed, and much appreciated.
However, I do think that Geany also lacks developers who can merge stuff. I feel the current team is afraid of merging non-trivial changes, leaving even semi-complex patches to Colomban. Unfortunately, Colomban has little time these days too, so we're kind of stuck. There are lots of PRs that have recent activity from their authors and have been tested appropriately, but still don't get attention from developers.
So I think we need more people who can push code to Geany directly, effectively dividing the workload among more people. It's just too much work for a single developer, especially these days.
Unless this situation improves, I'm afraid that intensive testing of PRs is nice but kind of a wasted effort. This is worsened by the fact that "unprivileged" testers can't assign labels in GitHub, so it's really hard to get an overview of which PRs have received extended testing.
In the meantime, we're scaring contributors away because contributions aren't looked at in a timely manner.
Take this as an application. I would love to actively help if I'm granted push access or the ability to set GitHub labels.
Best regards.
On 2017-04-28 02:35 PM, Thomas Martitz wrote:
On 27.04.2017 at 22:51, Vasiliy Faronov wrote:
Hi all,
From discussions elsewhere, such as [1], it sounds like one of the things holding back Geany development right now is a need for more testing.
Helping to test PRs is truly needed, and much appreciated.
However, I do think that Geany also lacks developers who can merge stuff. I feel the current team is afraid of merging non-trivial changes, leaving even semi-complex patches to Colomban. Unfortunately, Colomban has little time these days too, so we're kind of stuck. There are lots of PRs that have recent activity from their authors and have been tested appropriately, but still don't get attention from developers.
My general problem is that we don't have an unstable/development branch per se, nor proper automated testing, and I don't want to break master, so I won't merge a single thing without testing it thoroughly myself. This can turn a 5-10 minute merge into a testing session of several hours or more, requiring special setups and re-compiling Geany on 3 different OSes, etc.
Travis CI is great, but unless it can run make check with loads of static analysis and runtime analysis while it runs unit tests and such, it's basically just saying the code compiles. As we all know, it's relatively easy to make C code that compiles but crashes horribly at runtime in weird corner cases (off by one, null deref, etc.).
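For what it's worth, a rough sketch of how Clang's static analyzer could be run over a normal autotools build (scan-build ships with Clang; nothing here is Geany-specific, so take it as an assumption about how it would be wired up):

# wrap the usual build with scan-build; HTML reports land in analyzer-report/
./autogen.sh
scan-build ./configure
scan-build -o analyzer-report make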
Personally I'd feel a lot better merging PRs I haven't thoroughly tested if we had:
- Clang static analyzer during the build
- A Git hook or manual use of clang-format or other formatter to prevent the "extra white space" or "wrong comment style" type of issues that commonly occur in PRs.
- Ability for PRs to come with tests (requires testing support).
- Linking in Clang's address & memory sanitizers while running all of the tests.
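As a very rough illustration of the second item, the hook could be as small as something like the following. This assumes clang-format is installed and that a .clang-format file describing the Geany style exists at the repo root (it doesn't today), so treat it purely as a sketch:

#!/bin/sh
# hypothetical .git/hooks/pre-commit: refuse to commit C files that
# clang-format would change; for simplicity it checks the working-tree
# copy of each staged file rather than the staged content itself
fail=0
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.c' '*.h'); do
    if clang-format -style=file -output-replacements-xml "$f" \
            | grep -q '<replacement '; then
        echo "style: $f differs from clang-format output" >&2
        fail=1
    fi
done
exit $fail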
Just some thoughts.
Regards, Matthew Brush
On 29 April 2017 at 09:55, Matthew Brush mbrush@codebrainz.ca wrote:
On 2017-04-28 02:35 PM, Thomas Martitz wrote:
On 27.04.2017 at 22:51, Vasiliy Faronov wrote:
Hi all,
From discussions elsewhere, such as [1], it sounds like one of the things holding back Geany development right now is a need for more testing.
Helping to test PRs is truly needed, and much appreciated.
However, I do think that Geany also lacks developers who can merge stuff. I feel the current team is afraid of merging non-trivial changes, leaving even semi-complex patches to Colomban. Unfortunately, Colomban has little time these days too, so we're kind of stuck. There are lots of PRs that have recent activity from their authors and have been tested appropriately, but still don't get attention from developers.
My general problem is that we don't have an unstable/development branch per se, nor proper automated testing, and I don't want to break master, so I won't merge a single thing without testing it thoroughly myself. This can turn a 5-10 minute merge into a testing session of several hours or more, requiring special setups and re-compiling Geany on 3 different OSes, etc.
I have to agree with Matthew that:
1. Nobody wants to break master because it's what everybody is using. The problem is that if we had a development branch, nobody would be using it because it might break, so it would be insufficiently tested. I don't have a solution to that.
2. I am more willing to accept others' testing and to make a judgement call about testing on all platforms. I have used that approach successfully on other projects where I couldn't personally test some configurations. But I understand where Matthew is coming from regarding the amount of work needed to do a good testing job.
3. A thorough test is becoming too big a job, and that is even worse for the more complex PRs that Thomas mentions. I simply don't have the time. And for changes to the plugin interface that need a plugin to test, well, unless the OP provides such a plugin, it just isn't going to happen.
Travis CI is great, but unless it can run make check with loads of static analysis and runtime analysis while it runs unit tests and such, it's basically just saying the code compiles. As we all know, it's relatively easy to make C code that compiles but crashes horribly at runtime in weird corner cases (off by one, null deref, etc.).
Personally I'd feel a lot better merging PRs I haven't thoroughly tested if we had:
- Clang static analyzer during the build
- A Git hook or manual use of clang-format or other formatter to prevent
the "extra white space" or "wrong comment style" type of issues that commonly occur in PRs.
- Ability for PRs to come with tests (requires testing support).
- Linking in Clang's address & memory sanitizers while running all of the
tests.
Geany is almost entirely an interactive application, so until interactive tests are possible I don't think technical tests like these will add a great deal to the committability of PRs. Clangalizers and sanitizers and formatters won't tell you that the PR actually puts 'z' in the buffer instead of 'a'.
Perhaps Colomban knows more about using the accessibility framework for testing, now that Scintilla supports it?
Cheers Lex
On 2017-04-28 05:35 PM, Lex Trotman wrote:
On 29 April 2017 at 09:55, Matthew Brush mbrush@codebrainz.ca wrote:
On 2017-04-28 02:35 PM, Thomas Martitz wrote:
On 27.04.2017 at 22:51, Vasiliy Faronov wrote:
Hi all,
From discussions elsewhere, such as [1], it sounds like one of the things holding back Geany development right now is a need for more testing.
Helping to test PRs is truly needed, and much appreciated.
However, I do think that Geany also lacks developers who can merge stuff. I feel the current team is afraid of merging non-trivial changes, leaving even semi-complex patches to Colomban. Unfortunately, Colomban has little time these days too, so we're kind of stuck. There are lots of PRs that have recent activity from their authors and have been tested appropriately, but still don't get attention from developers.
My general problem is that we don't have an unstable/development branch per se, nor proper automated testing, and I don't want to break master, so I won't merge a single thing without testing it thoroughly myself. This can turn a 5-10 minute merge into a testing session of several hours or more, requiring special setups and re-compiling Geany on 3 different OSes, etc.
I have to agree with Matthew that:
- Nobody wants to break master because it's what everybody is using. The problem is that if we had a development branch, nobody would be using it because it might break, so it would be insufficiently tested. I don't have a solution to that.
- I am more willing to accept others' testing and to make a judgement call about testing on all platforms. I have used that approach successfully on other projects where I couldn't personally test some configurations. But I understand where Matthew is coming from regarding the amount of work needed to do a good testing job.
- A thorough test is becoming too big a job, and that is even worse for the more complex PRs that Thomas mentions. I simply don't have the time. And for changes to the plugin interface that need a plugin to test, well, unless the OP provides such a plugin, it just isn't going to happen.
Travis CI is great, but unless it can run make check with loads of static analysis and runtime analysis while it runs unit tests and such, it's basically just saying the code compiles. As we all know, it's relatively easy to make C code that compiles but crashes horribly at runtime in weird corner cases (off by one, null deref, etc.).
Personally I'd feel a lot better merging PRs I haven't thoroughly tested if we had:
- Clang static analyzer during the build
- A Git hook or manual use of clang-format or other formatter to prevent
the "extra white space" or "wrong comment style" type of issues that commonly occur in PRs.
- Ability for PRs to come with tests (requires testing support).
- Linking in Clang's address & memory sanitizers while running all of the
tests.
Geany is almost entirely an interactive application, so until interactive tests are possible I don't think technical tests like these will add a great deal to the committability of PRs.
If the tests just test functions, all that's needed is to get Geany started up; then the tests can call the new/changed functions, testing with different inputs and such. There are at least two PRs that do something similar.
Clangalizers and sanitizers and formatters won't tell you that the PR actually puts 'z' in the buffer instead of 'a'.
No, but they'll catch a number of runtime bugs that are often hard to identify upon basic code inspection or manual testing.
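As a concrete (hypothetical) example, a sanitizer-enabled build for manually testing a PR could be as simple as the following; the flags are generic Clang options rather than existing Geany configure switches, and the throwaway prefix is just an example:

# build into a disposable prefix with AddressSanitizer enabled
./autogen.sh
./configure --prefix=/tmp/geany-asan CC=clang \
    CFLAGS="-g -O1 -fsanitize=address -fno-omit-frame-pointer" \
    LDFLAGS="-fsanitize=address"
make install
# run it with a separate config; memory errors are reported on stderr
/tmp/geany-asan/bin/geany -c /tmp/geany-asan/config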
Perhaps Colomban knows more about using the accessibility framework for testing, now that Scintilla supports it?
There are several UI testing frameworks that work with GTK+, though I've not used any: autopilot, dogtail, and LDTP.
I don't think we really need fully automatic UI testing (seems like too much work), but we could get a long way just testing at the function level, ensuring functions uphold their contract and flexing them with unusual inputs. Making a testable function usually means writing it better too, avoiding global state and writing more "pure" functions, and making functions do one thing and not writing huge functions or many small functions.
Regards, Matthew Brush
...
Geany is almost entirely an interactive application, so until interactive tests are possible I don't think technical tests like these will add a great deal to the committability of PRs.
If the tests just test functions, all that's needed is to get Geany started up; then the tests can call the new/changed functions, testing with different inputs and such. There are at least two PRs that do something similar.
Sadly, Geany isn't a pure functional program; most functions leave messy side effects on global data, the Scintilla buffers :(
So you need to be able to examine those.
Clangalizers and sanitizers and formatters won't tell you that the PR actually puts 'z' in the buffer instead of 'a'.
No, but they'll catch a number of runtime bugs that are often hard to identify upon basic code inspection or manual testing.
Perhaps Colomban knows more about using the accessibility framework for testing, now that Scintilla supports it?
There are several UI testing frameworks that work with GTK+, though I've not used any: autopilot, dogtail, and LDTP.
I don't think we really need fully automatic UI testing (seems like too much work), but we could get a long way just testing at the function level, ensuring functions uphold their contract and flexing them with unusual inputs. Making a testable function usually means writing it better too, avoiding global state and writing more "pure" functions, and making functions do one thing and not writing huge functions or many small functions.
We really NEED automatic UI testing and we NEED function unit testing, but realistically we are not going to get either. If we don't have enough resources to just run and test PRs we don't have the resources to add these.
Hence my suggestions of purely social engineering in previous posts.
Cheers Lex
On 2017-04-28 06:35 PM, Lex Trotman wrote:
...
Geany is almost entirely an interactive application, so until interactive tests are possible I don't think technical tests like these will add a great deal to the committability of PRs.
If the tests just test functions, all that's needed is to get Geany started up; then the tests can call the new/changed functions, testing with different inputs and such. There are at least two PRs that do something similar.
Sadly, Geany isn't a pure functional program; most functions leave messy side effects on global data, the Scintilla buffers :(
So you need to be able to examine those.
You can: the tests are just regular extra functions called at runtime, so you have access to all the state that normal code does; it's just more trouble to set up and assert that state. When you have this in mind while writing a test for the new/changed function, you're more likely to make it more "pure" and single-task specific. The end result is better and more testable code, which would gradually spread through parts of the codebase.
Clangalizers and sanitizers and formatters won't tell you that the PR actually puts 'z' in the buffer instead of 'a'.
No, but they'll catch a number of runtime bugs that are often hard to identify upon basic code inspection or manual testing.
Perhaps Colomban knows more about using the accessibility framework for testing, now that Scintilla supports it?
There are several UI testing frameworks that work with GTK+, though I've not used any: autopilot, dogtail, and LDTP.
I don't think we really need fully automatic UI testing (seems like too much work), but we could get a long way just testing at the function level, ensuring functions uphold their contract and flexing them with unusual inputs. Making a testable function usually means writing it better too, avoiding global state and writing more "pure" functions, and making functions do one thing and not writing huge functions or many small functions.
We really NEED automatic UI testing and we NEED function unit testing, but realistically we are not going to get either. If we don't have enough resources to just run and test PRs we don't have the resources to add these.
The contributors add the tests exercising the PR changes, giving the person merging the change more confidence and less reason to test every little corner case themselves; and the tests are automated and repeatable, ensuring the assumptions they encode are not broken by other changes in the future. Instead of, as the OP suggested, writing up a prose testing report by hand, contributors would just write a test function covering the assumptions they have checked, which also exposes any missing assumptions.
Regards, Matthew Brush
As an exercise I scanned the top few (highest-numbered) PRs to assess their committability from MY personal point of view, found one immediately committable and committed it; the rest are:
#1482: still an open question whether it should revert to the previous bad behaviour.
#1481: work in progress.
#1478: an improvement was suggested, but it's committable once that's done.
#1471: haven't had time to look closely; lots of files modified (OK, many are icons, but still) and not a feature that I would test in my workflow.
#1470: haven't looked at it closely, but at first glance it's OK; it has an open "cannot reproduce" on a test report, but I don't use snippets, so it would only get cursory testing by me.
#1465: I have only a vague idea what it's doing and no idea how to test it other than compiling it (which Travis has already done).
#1461 and #1457: work in progress.
#1456: simply haven't had time to look at it.
#1450: a wiki page was suggested instead of adding it to core, as others have criticised adding more small filetypes to Geany; undecided.
#1445: review tantrum (see the comments on it) :)
#1430: has unaddressed review comments and Travis failures.
#1414: I support the idea, but it's a big change in a sensitive area (writing files safely is the PRIMARY purpose of an editor), and I don't have any networked files to test with. Also, although it explicitly doesn't change handling on Windows, it would need testing to make sure it didn't accidentally break something there.
#1402: I don't know VHDL, and testing it would mean checking that it didn't affect anything else, so there are time issues, and it needs actual test material.
#1400: still has a review open (though the changes have been made, I think); it simply needs time to test that there are no unexpected effects of the signal change.
That will do; I've spent more time than I wanted already. I guess there are only a couple that are specifically testing-related. Some more are due to the problem Matthew pointed out: not wanting to break master, so being cautious about complex-seeming changes. The rest are in the OPs' court.
Cheers Lex
On 29.04.2017 03:35, Lex Trotman wrote:
We really NEED automatic UI testing and we NEED function unit testing, but realistically we are not going to get either. If we don't have enough resources to just run and test PRs we don't have the resources to add these.
Would it help if we could find some BA or MA student, or some external person, to spend a few weeks/months full time on this?
Cheers, Frank
On 1 May 2017 at 23:32, Frank Lanitz frank@frank.uvena.de wrote:
On 29.04.2017 03:35, Lex Trotman wrote:
We really NEED automatic UI testing and we NEED function unit testing, but realistically we are not going to get either. If we don't have enough resources to just run and test PRs we don't have the resources to add these.
Would it help if we could find some BA or MA student, or some external person, to spend a few weeks/months full time on this?
Hi Frank,
Is that an offer? It would be hard to decide what they should work on, since as I said we need everything, but that would be a wonderful problem to have.
Cheers Lex
On 2017-05-03 12:45, Lex Trotman wrote:
On 1 May 2017 at 23:32, Frank Lanitz frank@frank.uvena.de wrote:
On 29.04.2017 03:35, Lex Trotman wrote:
We really NEED automatic UI testing and we NEED function unit testing, but realistically we are not going to get either. If we don't have enough resources to just run and test PRs we don't have the resources to add these.
Would it help if we could find some BA or MA student, or some external person, to spend a few weeks/months full time on this?
Is that an offer? It would be hard to decide what they should work on, since as I said we need everything, but that would be a wonderful problem to have.
Well, actually I have been thinking about this over the last few days but don't have a final idea yet. My rough idea is: as we have the association now, in theory we have a legal entity which can hire a freelancer to do unloved stuff, like fixing very Windows/Apple-specific bugs that none of the distributors is able or willing to fix. Assuming this would help, there are many questions to solve first. The biggest three in my mind currently are:
1) Who could do the work? Who is experienced enough, or willing to endure the pain, to fix those kinds of things?
2) What tasks need to be done (as in, tasks nobody else would ever touch on their own), and how do we consider them successfully solved? These could also be long-running maintenance tasks on our infrastructure.
3) Who is going to pay for it? We have some money from our great donors (big thanks to everyone who has ever contributed code, money or time), but it would not be enough to pay for maybe 1/2 month of work. So we might need to collect some extra money.
One possible solution: parts of this would be a great project for an MA/BA student majoring in something related to software development.
So, no final idea yet, but that's why I asked ;)
Cheers, Frank
On 29.04.2017 at 02:35, Lex Trotman wrote:
On 29 April 2017 at 09:55, Matthew Brush mbrush@codebrainz.ca wrote:
On 2017-04-28 02:35 PM, Thomas Martitz wrote:
On 27.04.2017 at 22:51, Vasiliy Faronov wrote:
Hi all,
From discussions elsewhere, such as [1], it sounds like one of the things holding back Geany development right now is a need for more testing.
Helping to test PRs is truly needed, and much appreciated.
However, I do think that Geany also lacks developers who can merge stuff. I feel the current team is afraid of merging non-trivial changes, leaving even semi-complex patches to Colomban. Unfortunately, Colomban has little time these days too, so we're kind of stuck. There are lots of PRs that have recent activity from their authors and have been tested appropriately, but still don't get attention from developers.
My general problem is that we don't have an unstable/development branch per se, nor proper automated testing, and I don't want to break master, so I won't merge a single thing without testing it thoroughly myself. This can turn a 5-10 minute merge into a testing session of several hours or more, requiring special setups and re-compiling Geany on 3 different OSes, etc.
I have to agree with Matthew that:
- Nobody wants to break master because it's what everybody is using. The problem is that if we had a development branch, nobody would be using it because it might break, so it would be insufficiently tested. I don't have a solution to that.
master *is* the development branch. It's not a stable branch that must not be broken at all costs. It's also not true that everyone is using master. The vast majority is using releases, and in fact we do regular releases so that we can use master as a true development branch. Even I (a very regular contributor) don't use master for the clone that I use daily. I always fork the last release, merge my changes, and backport individual commits from master (via cherry-pick). Of course I develop features based on master, so I do test the master branch on a regular basis.
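Roughly, that workflow looks something like this (the tag and branch names are just illustrative):

# start a personal branch from the latest release tag
git checkout -b daily 1.30
# merge my own pending work
git merge my-feature-branch
# and pull over individual fixes from master as wanted
git cherry-pick <commit-from-master>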
So yes, if you are afraid of doing development on the development branch, it's clear that we're struggling to get anything done. Sure, one can expect that PRs are perfect before getting merged, but the current situation shows that this is not working if you want to get something done in a timely manner.
From another angle, both of you could easily create a development branch. But you haven't so far. Anyway, how is that workflow supposed to work? If lots of PRs go through an intermediate branch, then merging that intermediate branch into master is going to be a nightmare too.
Best regards.
...
I have to agree with Matthew that:
- Nobody wants to break master because it's what everybody is using. The problem is that if we had a development branch, nobody would be using it because it might break, so it would be insufficiently tested. I don't have a solution to that.
master *is* the development branch. It's not a stable branch that must not be broken at all costs. It's also not true that everyone is using master. The vast majority is using releases, and in fact we do regular releases so that we can use master as a true development branch.
The vast majority are therefore not testing anything in master prior to release, so they are not helping stabilise the release. That's no help. (Of course users are not expected to help stabilise the release).
Even I (a very regular contributor) don't use master for the clone that I use daily. I always fork the last release, merge my changes, and backport individual commits from master (via cherry-pick). Of course I develop features based on master, so I do test the master branch on a regular basis.
So you don't do much testing of any changes in master, except those you choose to backport to your day-to-day version, or those you happen to exercise when testing your own Geany development.
Some of us do use git HEAD (or close to it; I'm a bit behind ATM), so we do check what will be in the next release, at least for those things in our normal workflow. Otherwise, if nobody used HEAD, it would be extremely lightly tested come release time.
Besides Emacs and Atom, I haven't looked at how other editor projects do it.
But certainly Emacs has most of its devs using HEAD or close to it, and they also try to be careful about what they commit. But of course it's not a GitHub project, so it gets fewer external contributions, and most have been through mailing-list hell before they get applied.
Atom takes a different approach, being very modular and having each part in a separate repository, over 200 according to their CONTRIBUTING.md. It is therefore more akin to Geany-Plugins, where individual parts can easily be handled separately. They also seem to make heavy use of feature branches in the main repos. I don't think that will work with a monolithic C application like Geany.
So yes, if you are afraid of doing development on the development branch, it's clear that we're struggling to get anything done. Sure, one can expect that PRs are perfect before getting merged, but the current situation shows that this is not working if you want to get something done in a timely manner.
It seems that the result of what you are advocating would be to release less-tested, more buggy versions? Or am I misunderstanding the result?
From another angle, both of you could easily create a development branch. But you haven't so far. Anyway, how is that workflow supposed to work? If lots of PRs go through an intermediate branch, then merging that intermediate branch into master is going to be a nightmare too.
Which is also true, and another reason it's not done that way.
Cheers Lex
On Sat, Apr 29, 2017 at 2:58 PM, Lex Trotman elextr@gmail.com wrote:
The vast majority are therefore not testing anything in master prior to release, so they are not helping stabilise the release. That's no help. (Of course users are not expected to help stabilise the release).
By the way, I think it might be a good idea to call on users for more testing.
Many of them must be technical people who wouldn't be scared by Git and may be interested in improvements. At least on Linux, it's (relatively) easy to build Geany from Git and run it with a copy of one's normal config. So it's easy and safe to try PRs out at least briefly.
I mean, at the moment geany.org doesn't even mention testing in its "Contribute" sections.
On 29 April 2017 at 23:15, Vasiliy Faronov vfaronov@gmail.com wrote:
On Sat, Apr 29, 2017 at 2:58 PM, Lex Trotman elextr@gmail.com wrote:
The vast majority are therefore not testing anything in master prior to release, so they are not helping stabilise the release. That's no help. (Of course users are not expected to help stabilise the release).
By the way, I think it might be a good idea to call on users for more testing.
Well, it can't hurt :)
Many of them must be technical people who wouldn't be scared by Git and may be interested in improvements. At least on Linux, it's (relatively) easy to build Geany from Git and run it with a copy of one's normal config. So it's easy and safe to try PRs out at least briefly.
Yes, it's only a "one-liner" (well, it would be one line if the mailer didn't wrap it :) after you have installed the prerequisites using your package manager:
mkdir /some/where/geany; cd /some/where/geany; git clone https://github.com/geany/geany.git; cd geany; ./autogen.sh --prefix=/some/where/geany; make install; cd ../bin; ./geany -c ../config
This keeps everything inside /some/where/geany, so you can delete it all later. It doesn't affect any Geany release you have installed in the system location, doesn't need system privileges, and doesn't overwrite your home-directory config, so you can mess with things to your heart's content. It is preferred that you use a clean config for testing if your normal Geany is an earlier version, but it won't hurt to copy your normal one to /some/where/geany/config afterwards. Remember that config is a directory.
To avoid gitting it (waa waa), you can also use the nightly tarball http://download.geany.org/geany_git.tar.gz, and then you don't need the Autotools stuff either.
mkdir /some/where/geany;
Then just use your browser to download the tarball and the GUI extractor that your distro has to extract it into git_geany in /some/where/geany. Or use your favourite command-line tools, but every distro has a browser and an extractor.
cd /some/where/geany/git_geany; ./configure --prefix=/some/where/geany; make install; cd ../bin; ./geany -c ../config
We really should publish this as the basic process for building from git and from the nightly tarball, together with a definitive list of dependencies and tools. The README waffles on about all the GTK deps etc., which makes it sound complex, but they should all be available from your package manager; it doesn't mention libvte, or that most of the tools will be provided by your distro's basic development packages, and in fact it never cleanly lists what's needed.
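For example, on Debian/Ubuntu-style systems something like the following should cover it, assuming deb-src lines are enabled; package names will differ on other distros, so take this as a sketch rather than the definitive list:

# pull in the build dependencies of the packaged Geany, plus the autotools
# basics a git build needs (package names may vary by distro)
sudo apt-get build-dep geany
sudo apt-get install build-essential autoconf automake libtool intltool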
There is a nightly Deb package built which you could use, but it will overwrite your system version of Geany (IIUC).
I mean, at the moment geany.org doesn't even mention testing in its "Contribute" sections.
Well, a "Call for testers" on the front page would be better still, (in flashing orange and purple striped text -- ok maybe not).
Enrico, any comment?
Cheers Lex
On Sun, Apr 30, 2017 at 2:40 PM, Lex Trotman elextr@gmail.com wrote:
We really should publish this as the basic process for building from git and from the nightly tarball, together with a definitive list of dependencies and tools. The README waffles on about all the GTK deps etc., which makes it sound complex, but they should all be available from your package manager; it doesn't mention libvte, or that most of the tools will be provided by your distro's basic development packages, and in fact it never cleanly lists what's needed.
I started drafting up a tutorial on Geany wiki:
https://wiki.geany.org/howtos/testing_git
Please feel free to reuse and/or improve.
Hi all,
Two points were raised in this thread that I feel might not have received enough attention. I'm going to try and float them once more. Please do tell me if I'm being too persistent.
1. Thomas has offered [1] his help in merging PRs if he is given more GitHub access.
2. Lex has agreed with me [2] that it might be a good idea to try and engage the users more in testing Geany, so as to reduce the risks in merging PRs. To that end, I have drafted up a tutorial [3] which may or may not help.
Any further thoughts/actions on this?
[1] http://lists.geany.org/pipermail/devel/2017-April/010237.html
[2] http://lists.geany.org/pipermail/devel/2017-April/010248.html
[3] https://wiki.geany.org/howtos/testing_git
On 28.04.2017 23:35, Thomas Martitz wrote:
Unless this situation improves, I'm afraid that intensive testing of PRs is nice but kind of a wasted effort. This is worsened by the fact that "unprivileged" testers can't assign labels in GitHub, so it's really hard to get an overview of which PRs have received extended testing.
At least for PRs at Geany-Plugins I need to disagree: there, a lot of testing, and of fixing the small issues found by testing, is lagging.
Cheers, Frank