Kinect will not only track body movements, detect our faces, and recognize our voices; it may also understand sign language. It's pretty intelligent, though we may not look it when we play. But man, do I want to show that ducking piece of motion-sensing hardware my middle finger.
The patent filing says Kinect may be able to recognize ASL (American Sign Language) or, failing that, perhaps read lip movements.
Here’s what it says:
“Where the user is unable to speak, he may be prevented from joining in the voice chat. Even though he would be able to type input, this may be a laborious and slow process to someone fluent in ASL. Under the present system, he could make ASL gestures to convey his thoughts, which would then be transmitted to the other users for auditory display. The user’s input could be converted to voice locally, or by each remote computer.”
It goes on: “In this situation, for example, when the user kills another user’s character, that victorious, though speechless, user would be able to tell the other user that he had been ‘PWNED’. In another embodiment, a user may be able to speak or make the facial motions corresponding to speaking words. The system may then parse those facial motions to determine the user’s intended words and process them according to the context under which they were inputted to the system.”
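To picture what that pipeline might look like, here's a minimal sketch in Python: a recognized sign becomes text, and each remote client turns that text into "speech" on arrival (the filing allows the conversion to happen locally or on each remote machine). Every name here (GestureEvent, SIGN_VOCABULARY, Peer) is hypothetical; none of this comes from an actual Kinect API.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    """A recognized sign from some gesture classifier (assumed upstream)."""
    label: str        # internal name of the recognized gesture
    confidence: float # classifier confidence, 0.0 to 1.0

# Toy lookup standing in for a real ASL recognizer's vocabulary.
SIGN_VOCABULARY = {
    "sign_pwned": "PWNED",
    "sign_gg": "good game",
}

def gesture_to_text(event: GestureEvent, threshold: float = 0.8):
    """Convert a recognized gesture to text, dropping low-confidence hits."""
    if event.confidence < threshold:
        return None
    return SIGN_VOCABULARY.get(event.label)

class Peer:
    """A remote player's client, which voices incoming chat text."""
    def receive_chat(self, text: str) -> None:
        # A real client would hand this to a text-to-speech engine;
        # printing stands in for the "auditory display" the filing mentions.
        print(f"[voice chat] {text}")

def broadcast(text: str, peers: list) -> None:
    """Send plain text to every remote player; each end synthesizes audio."""
    for peer in peers:
        peer.receive_chat(text)

if __name__ == "__main__":
    event = GestureEvent(label="sign_pwned", confidence=0.93)
    text = gesture_to_text(event)
    if text is not None:
        broadcast(text, [Peer(), Peer()])
```

Converting to speech on each receiving machine keeps the network traffic down to plain text, which is presumably why the filing leaves both options open.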
The filing also details the skeletal mapping itself: “Within the skeletal mapping system a variety of joints and bones are identified: each hand, each forearm, each elbow, each bicep, each shoulder, each hip, each thigh, each knee, each foreleg, each foot, the head, the torso, the top and bottom of the spine, and the waist. Where more points are tracked, additional features may be identified, such as the bones and joints of the fingers or toes, or individual features of the face, such as the nose and eyes.”
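Since the filing spells out the joint list, it maps neatly onto a simple data structure. The sketch below is my own illustration, not the Kinect SDK: the Joint names follow the quote, the coordinate conventions are assumed, and hand_above_head is a toy stand-in for real gesture logic, which would need the finger-level tracking mentioned above.

```python
from enum import Enum, auto
from typing import Dict, NamedTuple

class Joint(Enum):
    """One entry per tracked point named in the filing's joint list."""
    HAND_LEFT = auto();     HAND_RIGHT = auto()
    FOREARM_LEFT = auto();  FOREARM_RIGHT = auto()
    ELBOW_LEFT = auto();    ELBOW_RIGHT = auto()
    BICEP_LEFT = auto();    BICEP_RIGHT = auto()
    SHOULDER_LEFT = auto(); SHOULDER_RIGHT = auto()
    HIP_LEFT = auto();      HIP_RIGHT = auto()
    THIGH_LEFT = auto();    THIGH_RIGHT = auto()
    KNEE_LEFT = auto();     KNEE_RIGHT = auto()
    FORELEG_LEFT = auto();  FORELEG_RIGHT = auto()
    FOOT_LEFT = auto();     FOOT_RIGHT = auto()
    HEAD = auto()
    TORSO = auto()
    SPINE_TOP = auto()
    SPINE_BOTTOM = auto()
    WAIST = auto()

class Position(NamedTuple):
    x: float  # meters, camera space (assumed convention)
    y: float  # positive y is up (assumed)
    z: float  # distance from the sensor (assumed)

# One frame of skeletal data: every tracked joint gets a 3D position.
SkeletonFrame = Dict[Joint, Position]

def hand_above_head(frame: SkeletonFrame,
                    hand: Joint = Joint.HAND_RIGHT) -> bool:
    """Toy single-frame check: is the chosen hand raised above the head?
    Real sign recognition would track finger joints across many frames."""
    return frame[hand].y > frame[Joint.HEAD].y
```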
More on this story as it develops.