I'm doing some experiments with Firefly and I hit upon an idea. I'm going to brainstorm and throw myself at it regardless, but here's the basic premise.
You have a sprite sheet / texture map of different mouth positions; essentially a phoneme chart.
A sequence of events changes the displayed position on that sprite sheet; the end result is a talking animation done by shifting the display position on the sheet/map.
Phoneme Map:
00 - At Rest
01 - AI
02 - Oh
03 - E,R
04 - U
05 - D,L
06 - W/Q
07 - M/B/P
08 - F/V
09 - Untargeted (fail-safe if there's no properly targeted phoneme but something that isn't 'at rest' needs to be displayed)
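The chart above could be sketched as a plain lookup table. This is just a minimal illustration (all names are hypothetical, and I'm mapping letters rather than real phonemes): keys are characters, values are frame indices on the sprite sheet, with 09 as the fail-safe for any letter that has no targeted frame.

```python
# Hypothetical sketch of the phoneme chart as a lookup table.
# Frame 0 is "at rest"; frame 9 is the untargeted fail-safe.

REST_FRAME = 0
FAILSAFE_FRAME = 9

PHONEME_FRAMES = {
    "a": 1, "i": 1,          # 01 - AI
    "o": 2,                  # 02 - Oh
    "e": 3, "r": 3,          # 03 - E,R
    "u": 4,                  # 04 - U
    "d": 5, "l": 5,          # 05 - D,L
    "w": 6, "q": 6,          # 06 - W/Q
    "m": 7, "b": 7, "p": 7,  # 07 - M/B/P
    "f": 8, "v": 8,          # 08 - F/V
}

def frame_for(ch):
    """Map one character to a mouth frame, falling back to the fail-safe."""
    if not ch.isalpha():
        return REST_FRAME          # spaces/punctuation read as 'at rest'
    return PHONEME_FRAMES.get(ch.lower(), FAILSAFE_FRAME)
```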
Theory: a string contains "Hello World".
Based on an event sequence, as soon as that string is made visible, the following texture positions 'play' in sequence: 03*, 05*, 02, 06 [delay] 06, 02, 03, 05* (* = displayed for slightly longer). End result: the character looks like they 'mouthed' "Hello World".
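One naive way to generate that kind of sequence automatically is to walk the string character by character and emit (frame, duration) pairs, collapsing repeated frames into one longer hold (which also gives the "displayed for slightly longer" effect for doubled letters) and treating spaces as the rest/delay. A rough sketch, with all names and timings made up for illustration:

```python
# Hypothetical: turn a line of dialogue into (frame, duration_ms) pairs.
# Letter-to-frame mapping mirrors the phoneme chart; 9 is the fail-safe.

FRAMES = {"a": 1, "i": 1, "o": 2, "e": 3, "r": 3, "u": 4,
          "d": 5, "l": 5, "w": 6, "q": 6, "m": 7, "b": 7,
          "p": 7, "f": 8, "v": 8}
REST, FAILSAFE = 0, 9

def mouth_sequence(text, base_ms=80):
    seq = []
    for ch in text.lower():
        if ch.isspace():
            frame = REST                      # space = rest/delay
        elif ch.isalpha():
            frame = FRAMES.get(ch, FAILSAFE)  # unmapped letter = fail-safe
        else:
            continue                          # skip punctuation
        if seq and seq[-1][0] == frame:
            # Extend the previous frame instead of re-triggering it,
            # so the doubled 'l' in "Hello" becomes one longer hold.
            seq[-1] = (frame, seq[-1][1] + base_ms)
        else:
            seq.append((frame, base_ms))
    return seq
```

For "Hello World" this yields fail-safe, E, a long L, Oh, rest, W, Oh, E/R, then a long L (the 'ld' ending collapses since D and L share a frame), which is close to the hand-built sequence above.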
The basic idea is to have basic 3D mesh characters with all the expressive animation occurring on texture maps. I already know texture maps can be used to 'cheat' animation on characters, sort of like how old PSP RPGs like 'Legend of Heroes' did it. What I'm not sure about is an optimal approach for connecting it to the contents of a string or some kind of text display object. I suspect this is doable; I'm just not sure at this point how to make different 'frames' of the texture sheet display conditionally based on the contents of a string or some other dialogue display method.
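On the "connect it to a string" question, one engine-agnostic pattern is a tiny playback object: when the dialogue event fires (string becomes visible), you hand it a frame sequence, and the game loop ticks it every frame with the elapsed time; whatever frame index it returns is the sprite-sheet cell you display that tick. A sketch under those assumptions (the sequence format and all names are hypothetical):

```python
# Hypothetical sketch: time-driven playback of a mouth-frame sequence.
# The game loop calls tick(dt_ms) each frame and uses the returned
# index to pick the cell on the mouth sprite sheet.

class MouthPlayback:
    def __init__(self):
        self.queue = []       # pending (frame, duration_ms) pairs
        self.elapsed = 0.0    # time spent on the current frame
        self.frame = 0        # current sheet cell (0 = at rest)

    def start(self, seq):
        """Call this from the 'dialogue shown' event."""
        self.queue = list(seq)
        self.elapsed = 0.0

    def tick(self, dt_ms):
        """Advance by dt_ms; returns the frame to display this tick."""
        if not self.queue:
            self.frame = 0    # back to rest when the line finishes
            return self.frame
        self.elapsed += dt_ms
        frame, dur = self.queue[0]
        if self.elapsed >= dur:
            self.elapsed -= dur
            self.queue.pop(0)
        self.frame = frame
        return self.frame
```

The nice part of decoupling it this way is that the text system only needs to fire one event with a sequence attached; the renderer never looks at the string itself.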
What do you guys think of the whole concept?