Release Speed reader


@dimag0g: Could you make a version of your ORP engine that outputs the words in a format that is easy for the renderer to use? E.g. printf("%f,%i,%s\n", offset, delay, word); would be good.
Also, I'm assuming that the text is in UTF8 format.
Ok, I'm on it. Hopefully I will be able to give you something by tomorrow or the day after. Checking that every conversion tool is encoding text in UTF-8 may need some testing and tuning though.
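For concreteness, here is a minimal sketch of what such an emitter could look like, printing the agreed "offset,delay,word" lines on stdout. The ORP heuristic (roughly 30% into the word), the timing constants, and the byte-based length are placeholders, not the engine's actual values:

```c
/* Minimal sketch of the engine side of the contract: one
 * "offset,delay,word" line per word on stdout. Heuristics and
 * constants below are illustrative only. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char word[256];
    while (scanf("%255s", word) == 1) {
        size_t len = strlen(word);        /* byte count; real code should
                                             count UTF-8 code points */
        float offset = 0.3f * (float)len; /* naive ORP ~30% into the word */
        int delay = 200 + 10 * (int)len;  /* base time plus a few ms per letter */
        printf("%f,%i,%s\n", offset, delay, word);
    }
    return 0;
}
```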
 
If you're still looking for a pre-existing library for, or an example of, word splitting, you could look at TeX (the core technology behind LaTeX). Without getting back into the question of how to render the preview, it could still be useful for the speed-reader aspect.
 
If you're still looking for a pre-existing library for, or an example of, word splitting, you could look at TeX (the core technology behind LaTeX). Without getting back into the question of how to render the preview, it could still be useful for the speed-reader aspect.
Good point. Although hyphenation is not exactly what we need (we have to break words into smaller pieces for reading, while hyphenation breaks words into pieces for writing), it's probably a good start.
 
If you're still looking for a pre-existing library for, or an example of, word splitting, you could look at TeX (the core technology behind LaTeX). Without getting back into the question of how to render the preview, it could still be useful for the speed-reader aspect.
Does it work with Japanese? I'd hate to code that myself :)
 
I have updated the PND attached to the topic. Now it should look for an executable called render_sdl in its appdata directory and use it if possible. render_sdl should read its stdin and accept lines in the format we agreed on.
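For illustration, a sketch of how the render_sdl side might parse that stream, assuming the comma-separated format agreed on above (a real parser would also have to cope with commas inside words):

```c
/* Sketch of the renderer's input loop for the agreed
 * "offset,delay,word" format; actual drawing is stubbed out. */
#include <stdio.h>

int main(void)
{
    char line[512], word[256];
    float offset;
    int delay;
    while (fgets(line, sizeof line, stdin)) {
        /* naive parse: assumes the word itself contains no comma */
        if (sscanf(line, "%f,%i,%255[^\n]", &offset, &delay, word) == 3)
            printf("draw '%s' with ORP at %.2f, hold %d ms\n",
                   word, offset, delay);
    }
    return 0;
}
```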

I have also included xclip in the PND. It should work with plain text (2+ lines), file names as provided by the file manager, and hyperlinks.
 
Here is a simple render_sdl. Press F for full screen, space to pause, Q to quit. The source should be readable enough, I hope. It's GPL v3. Can you take it from there and add more features?
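For anyone reading along, the key handling boils down to something like the following. This is a stripped-down illustration using SDL2, not the attached source:

```c
/* Sketch of the renderer's event loop: F toggles fullscreen,
 * space pauses, Q quits. Text drawing is omitted. */
#include <SDL2/SDL.h>
#include <stdbool.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    SDL_Window *win = SDL_CreateWindow("speedreader",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 240, 0);
    bool fullscreen = false, paused = false, running = true;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev)) {
            if (ev.type == SDL_QUIT) {
                running = false;
            } else if (ev.type == SDL_KEYDOWN) {
                switch (ev.key.keysym.sym) {
                case SDLK_q:
                    running = false;
                    break;
                case SDLK_SPACE:
                    paused = !paused;   /* freeze the word stream */
                    break;
                case SDLK_f:
                    fullscreen = !fullscreen;
                    SDL_SetWindowFullscreen(win,
                        fullscreen ? SDL_WINDOW_FULLSCREEN_DESKTOP : 0);
                    break;
                }
            }
        }
        if (!paused) {
            /* read the next "offset,delay,word" line and draw it here */
        }
        SDL_Delay(10);
    }
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```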

Navigation probably requires some changes to the engine output stream; perhaps a tighter coupling of the engine and the renderer will be needed after all.
 

Attachments

  • render.tar.gz
    11.2 KB
Thanks a lot, _wb_! I'll try to produce a beta version in the following days, with a bare minimum of features. Something I wouldn't be ashamed to put in the repo.
 
Interesting discussion on HN about what sometimes doesn't work with these high-speed readers:

https://news.ycombinator.com/item?id=7385634

-> need adaptive behavior: very long words need some additional time to be recognized

-> unusual words may need more time as well: technical/medical/scientific words need time to be understood.

So a constant reading rate may not be 100% appropriate, and a variable rate should be considered for such an application. That means the software needs to be able to classify words in some way.
 
Word length is already taken into account by adding a few ms for each additional letter. Reading speed in wpm is, and always was, approximate.

I hope to handle usual/unusual words one day, as well as typical word prefixes/suffixes (words with common prefixes like auto- have their ORP shifted to the right, while common suffixes like -tion tend to shift the ORP to the left). Also, usual prefixes/suffixes/words need less time for recognition.
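To make the idea concrete, a toy sketch of the ORP adjustment; the prefix/suffix lists and shift amounts are invented, and real values would come from per-language data:

```c
/* Toy sketch: common prefixes push the ORP right, common suffixes
 * pull it left. Lists and shift amounts are placeholders. */
#include <stdio.h>
#include <string.h>

static const char *prefixes[] = { "auto", "un", "re" };
static const char *suffixes[] = { "tion", "ing", "ness" };

float orp_index(const char *word)
{
    size_t len = strlen(word);
    float orp = 0.3f * (float)len;  /* baseline: ~30% into the word */
    for (size_t i = 0; i < sizeof prefixes / sizeof *prefixes; i++) {
        size_t n = strlen(prefixes[i]);
        if (len > n && strncmp(word, prefixes[i], n) == 0)
            orp += 1.0f;            /* familiar prefix shifts ORP right */
    }
    for (size_t i = 0; i < sizeof suffixes / sizeof *suffixes; i++) {
        size_t n = strlen(suffixes[i]);
        if (len > n && strcmp(word + len - n, suffixes[i]) == 0)
            orp -= 1.0f;            /* familiar suffix shifts ORP left */
    }
    return orp;
}

int main(void)
{
    printf("%.1f\n", orp_index("automation"));
    return 0;
}
```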

However, language-detection code is required at this stage, as well as dictionaries for each supported language. I'm not sure I'll have enough time in the foreseeable future for that kind of feature.
 
On second thought, I may be able to use the text itself to assess frequent words. So the first time the word "making" is displayed, it gets as much time as any 6-letter word would. The second time, it gets displayed for a slightly shorter time, and so on.

Still, a dictionary-based solution is superior.
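As a sketch of how that could look, with a small hash table counting occurrences and the display time shrinking on each repetition down to a floor (all constants are placeholders):

```c
/* Sketch of adaptive timing from on-the-fly word frequencies. */
#include <stdio.h>
#include <string.h>

#define TABLE 4096

static char seen[TABLE][32];
static int count[TABLE];

static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % TABLE;
}

int delay_ms(const char *word)
{
    unsigned h = hash(word);
    if (strncmp(seen[h], word, 31) != 0) { /* new word (collisions just reset) */
        strncpy(seen[h], word, 31);
        count[h] = 0;
    }
    count[h]++;
    int base = 200 + 10 * (int)strlen(word); /* length-based time */
    int d = base - 20 * (count[h] - 1);      /* shorter on each repetition */
    return d > 120 ? d : 120;                /* never below a floor */
}

int main(void)
{
    /* "making" gets 260, then 240, then 220 ms */
    printf("%d %d %d\n", delay_ms("making"), delay_ms("making"),
           delay_ms("making"));
    return 0;
}
```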
 
On second thought, I may be able to use the text itself to assess frequent words. So the first time the word "making" is displayed, it gets as much time as any 6-letter word would. The second time, it gets displayed for a slightly shorter time, and so on.

Still, a dictionary-based solution is superior.
You can make the word counts persistent over multiple runs and/or pre-train them on a corpus of reference material.

To find common prefixes/suffixes you could use a similar approach.

Another method could be to use some statistics on n-grams (http://en.wikipedia.org/wiki/N-gram) to help guide the heuristic -- e.g. remove common n-grams from both ends of the word before you compute its "length" and the position of the ORP (see the sketch at the end of this post).

Or if you can find a list of words with their ORP, you could use a Hidden Markov Model (http://en.wikipedia.org/wiki/Hidden_Markov_model) or something like that to learn a model from that and generalize to arbitrary words.

Of course all of this is language-dependent, but there is no way around that. If it can be automated, it's good enough -- you can let the user provide a reference corpus or just learn on the fly.
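To sketch the n-gram stripping idea (the list of "common" n-grams is invented here; real ones would come from corpus statistics):

```c
/* Sketch: strip common n-grams from both ends of a word before
 * measuring its effective length for timing and ORP placement. */
#include <stdio.h>
#include <string.h>

static const char *common_ends[] = { "ing", "ed", "tion", "un", "re" };

size_t effective_length(const char *word)
{
    size_t start = 0, end = strlen(word);
    for (size_t i = 0; i < sizeof common_ends / sizeof *common_ends; i++) {
        size_t n = strlen(common_ends[i]);
        if (end - start > n && strncmp(word + start, common_ends[i], n) == 0)
            start += n;                     /* strip from the front */
        if (end - start > n &&
            strncmp(word + end - n, common_ends[i], n) == 0)
            end -= n;                       /* strip from the back */
    }
    return end - start;
}

int main(void)
{
    /* "understanding" -> "un" and "ing" stripped -> 8 */
    printf("%zu\n", effective_length("understanding"));
    return 0;
}
```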
 
<edit>Some posts appeared between the one I was looking at when writing this and the actual post. I was replying to this:

Interesting discussion on HN about what sometimes doesn't work with these high-speed readers:

https://news.ycombinator.com/item?id=7385634

-> need adaptive behavior: very long words need some additional time to be recognized

-> unusual words may need more time as well: technical/medical/scientific words need time to be understood.

So a constant reading rate may not be 100% appropriate, and a variable rate should be considered for such an application. That means the software needs to be able to classify words in some way.
But most of it was already addressed.</edit>
Certainly true about the recognition time for unusual words. But if you still have some headroom with the usual words, then you also have some time left to recognize an unusual word after it has already vanished again (after all, your brain is perfectly capable of holding the image of that word for a split second :) ). That only becomes relevant if you want to go to the absolute maximum, and then there is the problem that recognition time varies between people: people with a background in physics won't have a problem with unusual physics terms, linguists will understand linguistic ones faster, etc.
 
people with a background in physics won't have a problem with unusual physics terms, linguists will understand linguistic ones faster, etc.
This is where adaptive approaches help. If I manage to program a persistent word-count database, then you will only go slowly through your first physics article. On the second one, speedreader will already know those words are known to you.
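A sketch of what that persistence could look like, with a plain "count word" text file loaded at startup and written back on exit (file name and format are made up for the example):

```c
/* Sketch of a persistent word-count store: "count word" per line. */
#include <stdio.h>

#define MAXWORDS 10000

static char words[MAXWORDS][32];
static int counts[MAXWORDS];
static int nwords;

static void load_counts(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return;  /* first run: nothing saved yet */
    while (nwords < MAXWORDS &&
           fscanf(f, "%d %31s", &counts[nwords], words[nwords]) == 2)
        nwords++;
    fclose(f);
}

static void save_counts(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return;
    for (int i = 0; i < nwords; i++)
        fprintf(f, "%d %s\n", counts[i], words[i]);
    fclose(f);
}

int main(void)
{
    load_counts("wordcounts.txt");
    /* ... update counts during the reading session ... */
    save_counts("wordcounts.txt");
    return 0;
}
```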
 