@rdipardo
> Well, being easy but inefficient allowed Python and PHP to become the institutions they are today. Scintilla has followed that tradition to great success.
I'm not sure I understand how Scintilla follows the "inefficient but easy" tradition; I would have said that writing everything in C++ follows the "difficult but efficient" tradition :smile:
Recognising that words (identifiers/names/whatever your language calls them) can represent several different syntactic constructs, and that these tend to change as the language evolves, Scintilla provides the facility for the application (that's Geany) to supply several lists of words, plus facilities for the lexer to efficiently recognise if, and in which, list a word appears, so that members of different lists can be styled differently. Most lexers happily use this facility, but how many lists they support varies from lexer to lexer. Geany even (mis)uses it to supply lists of typenames detected by the ctags parsers/tagfiles dynamically at runtime for some languages (e.g. C/C++).[^1]
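To make the mechanism concrete, here is a self-contained sketch of the lexer-side lookup, not Scintilla's actual `WordList` class (which lives in lexlib): the application passes each list to the editor as one space-separated string (that is what `SCI_SETKEYWORDS` takes), and the lexer then asks which list, if any, contains the identifier it just scanned. The class and method names here are illustrative only.

```cpp
#include <array>
#include <sstream>
#include <string>
#include <unordered_set>

// Illustrative sketch of per-list keyword lookup; Scintilla's real
// WordList implementation is different, this just shows the idea.
class KeywordLists {
    std::array<std::unordered_set<std::string>, 4> lists;
public:
    // The application hands over each list as one space-separated string.
    void Set(size_t index, const std::string &spaceSeparated) {
        std::istringstream in(spaceSeparated);
        std::string word;
        lists[index].clear();
        while (in >> word)
            lists[index].insert(word);
    }
    // Returns the index of the first list containing the word, or -1,
    // which the lexer would map to a style number.
    int ListOf(const std::string &word) const {
        for (size_t i = 0; i < lists.size(); ++i)
            if (lists[i].count(word))
                return static_cast<int>(i);
        return -1;
    }
};
```

The dynamic typename trick mentioned above is then just the application calling the equivalent of `Set()` again at runtime with a freshly generated list.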
The Visual Prolog lexer supports these lists:
```c++
static const char *const visualPrologWordLists[] = {
    "Major keywords (class, predicates, ...)",
    "Minor keywords (if, then, try, ...)",
    "Directive keywords without the '#' (include, requires, ...)",
    "Documentation keywords without the '@' (short, detail, ...)",
    0,
};
```
I think @techee only provided for two in the filetype file. Maybe all four can be allowed, since there is no ctags parser for the language, so none of the lists need to be reserved for dynamically supplied typenames. Then the lists might be better arranged.
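For illustration only, a `[keywords]` section exposing all four lists in the filetype definition file could look something like the sketch below. The key names and the word lists shown are hypothetical examples; the actual key names have to match whatever the Visual Prolog highlighting mapping in Geany expects, and the full keyword lists would be much longer.

```ini
[keywords]
# hypothetical key names, one space-separated list per Scintilla keyword set
major=class predicates clauses implement end
minor=if then else try catch finally
directives=include requires
docs=short detail
```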
[^1]: Lexers run on each keystroke, so they need to be fast, do little, and ignore incomplete syntax; they just identify the syntactic entities. Parsers need to understand the language well enough to read declarations, so they mostly run after a delay, on the basis that if the meatware has stopped typing they are likely to be thinking for a while, so a parse delay is less likely to be noticed, and the code is also more likely to be legal enough to parse.
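The keystroke-vs-idle split in the footnote amounts to a simple debounce. A minimal sketch, assuming a 500 ms idle threshold (the threshold, class, and method names are all invented for illustration; neither Scintilla nor Geany is implemented this way verbatim):

```cpp
#include <chrono>

// Illustrative debounce: the lexer runs on every keystroke, but the
// (slow) symbol parser only runs once the user has been idle a while.
class ParseDebouncer {
    using Clock = std::chrono::steady_clock;
    Clock::time_point lastKeystroke = Clock::now();
public:
    void OnKeystroke() { lastKeystroke = Clock::now(); }
    // True once the user has paused long enough that a parse is
    // unlikely to be noticed (500 ms is an arbitrary example value).
    bool ShouldParse(Clock::time_point now = Clock::now()) const {
        return now - lastKeystroke >= std::chrono::milliseconds(500);
    }
};
```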