Alexa will be able to recognize moods

22 October 2018


Amazon has patented a technology that would let a speech-analysis system recognize when a user is ill. Besides detecting a cold from coughing and hoarseness, the system would also be able to recognize the user's mood. The company is expected to use this technology in its voice assistant, Alexa.

Amazon introduced the Alexa voice assistant in 2014. Beyond the Amazon Echo, for which the assistant was originally developed, the company now uses it in a wide range of devices, from microwaves to cars. Its speech-analysis technology is constantly being refined: a year ago the voice assistant learned to give personalized answers to different users, and recently a hobbyist developer taught Alexa to respond to sign-language requests.

Now the company has patented a method for diagnosing illness from the user's voice. The patent, granted on October 9, indicates that speech-recognition technology for such diagnostics would be used in Amazon devices (for example, the same Echo speaker). Such a diagnosis would not, of course, replace a medical one: by analyzing the temporal and spatial parameters of speech, as well as changes in the voice caused by a cough or sore throat, Alexa could, for example, ask the user whether they are ill and offer to order medicine for them. For emotion analysis, the system would have access not only to voice responses but also to the user's search history, supplementing the analysis with information about recent online activity. With this, the system could determine, for example, that a person is sad or bored, ask how they are doing and what they would like to do, or suggest watching a movie. In the future, mood analysis might also be used to diagnose mental disorders, but the patent does not elaborate on this.


Sources:

https://arstechnica.com/gadgets/2018/10/amazon-patents-alexa-tech-to-tell-if-youre-sick-depressed-and-sell-you-meds/

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=10,096,319&OS=10,096,319&RS=10,096,319


Speech recognition without actual voice?

21 October 2018


Chinese developers have created a smartphone application that recognizes silent speech from the movements of the user's lips and turns commands into actions on the device; for example, it can launch other applications. Unlike ordinary voice assistants, the developers say, the application can be used in public places without disturbing other people.

Almost all modern smartphones come with voice assistants that recognize and execute user commands. In recent years, developers have brought the accuracy of speech-recognition algorithms up to the level of professional typists and have taught assistants to hold a dialogue by remembering the context of previous commands. However, studies show that most people avoid using voice assistants in public places because they feel uncomfortable doing so.

Yuanchun Shi and his colleagues at Tsinghua University have developed a voice assistant for smartphones that can recognize speech from lip movements, even when the user makes no sound.

During operation, the application detects the face in the frame from the smartphone's camera and then tracks the positions of 20 control points that accurately describe the shape of the lips. It also measures the degree of mouth openness, which lets it detect when a command begins and ends. The data is then passed to a second algorithm, based on a convolutional neural network, which performs the actual recognition of speech from lip movements. It is worth noting that for now the developers run the recognition not on the smartphone itself but on a separate, fairly powerful computer.
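The segmentation step described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the landmark layout (upper-lip points first, lower-lip points second), the openness threshold, and the minimum segment length are all assumptions made for the example.

```python
import numpy as np

def mouth_openness(landmarks):
    """Mean vertical gap between upper- and lower-lip control points.

    `landmarks` is an (N, 2) array of lip points; for illustration we
    assume the first half describes the upper lip and the second half
    the lower lip.
    """
    pts = np.asarray(landmarks, dtype=float)
    upper, lower = pts[: len(pts) // 2], pts[len(pts) // 2 :]
    return float(np.mean(lower[:, 1]) - np.mean(upper[:, 1]))

def segment_commands(frames, threshold=4.0, min_len=3):
    """Return (start, end) frame indices of likely silent commands.

    A command is a run of at least `min_len` consecutive frames whose
    mouth openness exceeds `threshold` (both values are illustrative).
    """
    open_flags = [mouth_openness(f) > threshold for f in frames]
    segments, start = [], None
    for i, is_open in enumerate(open_flags):
        if is_open and start is None:
            start = i                      # mouth just opened
        elif not is_open and start is not None:
            if i - start >= min_len:       # long enough to be a command
                segments.append((start, i))
            start = None
    if start is not None and len(open_flags) - start >= min_len:
        segments.append((start, len(open_flags)))
    return segments
```

In the real system the frames inside each detected segment would then be handed to the convolutional neural network for recognition; here only the boundary detection is sketched.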

The authors developed 44 commands for the application: some apply to the whole system (for example, turning on Wi-Fi), some to specific applications, and others let the user interact with any application through system services (for example, selecting text). The application also understands the context of commands: if the system shows a pop-up window with a message, the user can quickly respond to it.
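A minimal sketch of how such a three-scope, context-dependent command set might be dispatched. The command names, handlers, and context keys here are hypothetical; the paper's actual 44-command set is not reproduced.

```python
# Illustrative command tables for the three scopes described above.
SYSTEM_COMMANDS = {"turn on wifi": lambda ctx: "wifi enabled"}
APP_COMMANDS = {
    "camera": {"take photo": lambda ctx: "photo taken"},
}
CONTEXT_COMMANDS = {"reply": lambda ctx: f"replying to {ctx['popup']}"}

def dispatch(command, ctx):
    """Resolve a recognized silent command against the current context.

    `ctx` is a dict describing device state, e.g. the foreground app
    or a visible notification popup (keys are assumptions here).
    """
    # Context-sensitive commands take priority: "reply" only makes
    # sense while a notification popup is on screen.
    if command in CONTEXT_COMMANDS and ctx.get("popup"):
        return CONTEXT_COMMANDS[command](ctx)
    # App-specific commands apply only inside the matching app.
    app = ctx.get("app")
    if app in APP_COMMANDS and command in APP_COMMANDS[app]:
        return APP_COMMANDS[app][command](ctx)
    # System-wide commands work everywhere.
    if command in SYSTEM_COMMANDS:
        return SYSTEM_COMMANDS[command](ctx)
    return None
```

For example, `dispatch("reply", {"popup": "new message"})` resolves the command against the visible popup, while the same command with no popup on screen resolves to nothing.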

Sources:

Sun, K., Yu, C., Shi, W., Liu, L. and Shi, Y., 2018, October. Lip-Interact: Improving Mobile Device Interaction with Silent Speech Commands. In The 31st Annual ACM Symposium on User Interface Software and Technology (pp. 581-593). ACM.
