Chris Mellon wrote:
wx does (in large part), but most likely the problem is that the "rich
text" control used in most editors is not the win32 rich text control,
but instead Scintilla, which is designed for source editing and is
much easier to use. Very few editors, of any kind, use the native
win32 text control for source highlighting.
wx does have (some) support for the accessibility features in win32; you
might post a feature request on the wx bug tracker to add them to
the wx platform bindings for Scintilla.
The main reason editors don't use the standard control is for syntax
highlighting, and perhaps folding and margins, though, which I'm not
sure are especially valuable to you. What kinds of features make a
Python editor 'smart' for someone who's coding with a screen reader?
What you said makes wonderful sense. Thank you for explaining. Now, if you
read the application note, after the list of rich text objects they expect, they
describe a fuzzy in-between state where, if you tweak the configuration, you get
Select-and-Say but lose a whole bunch of information about the objects, etc. Would
you be so kind as to read over that part of the application note and tell me if
it applies to any of the Scintilla objects for text display?
I'm glad to hear you have some of the extensibility features. Some are better
than none. There is an event coming up in the next few weeks that will trigger
a need for accessibility interfaces on the Linux side.
Well, I'm not sure about a screen reader, but I'm using speech to text. My
apologies if I wasn't clear (reading back, I see a couple of dropouts that I didn't
catch, and they change the meaning significantly). As a brief aside, one of the classic
problems is can versus can't, which leaves you very much in doubt if someone
writes "I can go to bed with you." Is it a misrecognition or an expression of
desire? Only your natural language processing will know for sure. :-)
There's a whole hierarchy of needs. The VoiceCoder project has done some nice
work in that domain. For me, the fundamental level is the ability to correct
and replace without error. I should be able to select a phrase or set of words
and have that region highlighted properly so I can operate on it. I should be
able to delete the last utterance and not have it go wrong.
The next level (which the VoiceCoder project handles) is the ability to dictate
a word and, depending on the context or its knowledge of symbols, generate a code
symbol, be it bumpy caps or joined with underscores or some combination thereof.
Additionally, David Fox, the primary developer, did some absolutely gorgeous
work with correction mechanisms.
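To make the symbol-generation level concrete, here is a minimal sketch of turning a dictated phrase into a code symbol in either "bumpy caps" or underscore style. The function name and style keywords are my own illustration, not taken from the VoiceCoder project itself:

```python
def words_to_symbol(phrase, style="underscores"):
    """Turn a dictated phrase like 'get user name' into a code symbol.

    Two illustrative styles: 'bumpy' (bumpy caps, e.g. getUserName)
    and 'underscores' (e.g. get_user_name).
    """
    words = phrase.lower().split()
    if style == "bumpy":
        # First word stays lowercase, later words get a capital letter.
        return words[0] + "".join(w.capitalize() for w in words[1:])
    if style == "underscores":
        return "_".join(words)
    raise ValueError("unknown style: %s" % style)

print(words_to_symbol("get user name", "bumpy"))        # getUserName
print(words_to_symbol("get user name", "underscores"))  # get_user_name
```

A real tool would of course also consult the surrounding code to pick the style that matches existing symbols, which is part of what makes VoiceCoder's context handling interesting.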
The next level up would be contextual awareness of variables within your scope
so that when you dictate something, you don't have to have a big collection of
static symbols. You create them dynamically based on where you are located in
your code, which file you are in, and what modules you include. This last one is
going to be difficult and may not show up for a few years.
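As a rough sketch of that contextual level, an editor could build a dynamic vocabulary from the file being edited instead of a static symbol list. This example uses only the standard library's `ast` module; a real implementation would also track the cursor position and resolve imported modules:

```python
import ast

def symbols_in_source(source):
    """Collect names defined in a Python source string.

    Gathers function/class definitions, assigned variables, and
    imported module names into one sorted vocabulary list.
    """
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            names.add(node.name)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            names.add(node.id)
        elif isinstance(node, ast.Import):
            names.update(alias.asname or alias.name for alias in node.names)
    return sorted(names)

example = "import os\ncount = 0\ndef get_user_name():\n    pass\n"
print(symbols_in_source(example))  # ['count', 'get_user_name', 'os']
```

The point is that the recognition vocabulary can be regenerated every time the file changes, so dictated symbols follow the code rather than a fixed list.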
But all I'm asking for right now is simple Select-and-Say as outlined in that
application note from Nuance. Then I can say "_" with English words on either side
and the code will work.