Future Cars Will Be Voice Driven
Internet access, apps, the sound system: the number of functions on car dashboards and steering wheels is growing steadily and often overwhelms the driver. To make sure drivers can concentrate on the road, many manufacturers are turning to voice as a method of operation. Mercedes, for example, draws on Apple's iPhone 4S and Siri, while Ford builds its own Sync system together with speech specialist Nuance.
Artificial intelligence as in the TV show "Knight Rider" has not been reached yet, but rudimentary driver-to-vehicle communication is already possible. Ford began researching voice control years ago with its Sync solution, and other companies have since followed with voice-operated systems of their own. At the Geneva Motor Show, Mercedes introduced a new entertainment system for its A-Class that accesses Apple's Siri.
Siri puts voice centre stage
"Apple has affected the market a lot here. Voice has been put back into people's minds," Brigitte Richardson, lead engineer for speech systems at Ford, tells futurezone. The US manufacturer has offered in-car voice control since 2007 and is a forerunner in the field. Like Apple, Ford cooperates with the leading speech-technology provider, Nuance. Since the hype around the iPhone 4S and Siri, the company has seen increased demand for the technology; according to Richardson, customers ask about it more actively, and the roughly four million drivers whose cars already have it are now exploring it.
Higher safety and concentration
One reason for the increased use of voice is the growing number of functions on the dashboard and steering wheel. Above all, the integration of e-mail, apps such as Facebook and Twitter, and the Internet makes new operating concepts necessary. Drivers want to use these in their cars, but manufacturers have to keep the distraction as small as possible, Richardson explains. For texts and messages the system relies on spoken read-outs; for control, on voice commands.
"It is a very natural way of operating and much safer than typing with your fingers," Richardson says. Little has to be explained; users get used to it quite quickly. To make it work reliably, Ford had to build up a lot of know-how. "We often work with synonyms. Dialects also have to be considered to a certain extent," Richardson says. As with all current voice control systems, this turned out to be the biggest challenge. "If you have to pronounce things precisely, that means more effort and a loss of concentration on the driver's side, which is counterproductive."
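The idea of "working with synonyms" can be sketched as a simple mapping from many spoken phrasings onto one canonical command. This is purely illustrative: the command names and vocabulary below are invented for the example and say nothing about Ford's actual implementation.

```python
# Illustrative only: map varied phrasings ("synonyms") onto one command,
# as the article describes. Vocabulary and command names are hypothetical.
SYNONYMS = {
    "phone": ("call", "dial", "phone", "ring"),
    "radio": ("radio", "tune to", "station"),
    "navigate": ("navigate to", "drive to", "go to", "directions to"),
}

def normalize(utterance):
    """Return the canonical command an utterance starts with, or None."""
    text = utterance.lower().strip()
    for command, variants in SYNONYMS.items():
        if any(text.startswith(v) for v in variants):
            return command
    return None  # unrecognized: a real system would ask the driver to rephrase
```

So "Drive to Main Street" and "Navigate to Main Street" both resolve to the same `navigate` command, without requiring the driver to memorize an exact phrase.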
Only a few words for complex functions
The required degree of accuracy can be bounded quite well, as drivers address the system very purposefully. "Users want a system that understands natural language, but in reality there is no conversation going on. Drivers give short, concise commands," Richardson says.
Ford's research into how the feature is used produced no surprises. Above all, it is used for entering addresses; searching for songs or radio stations is also preferably done by voice. "Anything that is quicker with buttons or a touchscreen will continue to be done that way," Richardson said.
Volume control remains a physical knob, while the air-conditioning temperature can be set by a direct voice command. The research also showed that voice is only a supplement: "Classic buttons and touchscreens are here to stay."
Better chips for faster voice recognition
That voice will be used more in cars (Ford expects some 3.5 million Sync vehicles) also depends on better technology. Voice operation needs small, powerful chips, which will soon be available; Intel wants to push into this area, and the first cars with "Intel inside" will come onto the market this year (BMW 7 Series, Mercedes C- and S-Class). "We have done intense research into input methods and have many usage studies," Stacy Palmer, who is responsible for Intel's automotive unit, tells futurezone. According to Palmer, Intel is thinking about how to support gesture and voice input with its processors. With all the digital assistance, the entertainment system, and the cameras and sensors, a car needs a lot of computing power. "In the future, speech decoding will be added, which has to be taken into account in the architecture," the manager said.
The car and the cloud
Another reason for the spread of voice commands in cars is the demand for internet access. Mercedes' "Comand Online", which supports voice commands, is permanently connected to the internet, as is Hyundai's telematics and entertainment solution "Blue Link", which also offers voice recognition. Ford cars carry a permanent on-board database of more than 10,000 expressions; if required, as with Siri, the cloud can be accessed.
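The hybrid approach described here can be sketched as a two-stage lookup: try the on-board phrase database first, and only fall back to a cloud recognizer for unmatched input. Everything below is a hypothetical illustration; the phrase list is a stand-in for Ford's real database, and the cloud call is a stub, not any vendor's API.

```python
# Hedged sketch of local-first recognition with a cloud fallback.
ONBOARD_PHRASES = {  # stand-in for an on-board database of ~10,000 expressions
    "turn up the volume": "volume_up",
    "set temperature to 21": "climate_set_21",
    "play radio": "radio_on",
}

def cloud_recognize(utterance):
    # Stub for a network round-trip to a cloud recognizer (as with Siri).
    return {"action": "web_search", "query": utterance}

def recognize(utterance):
    """Try the on-board database first; fall back to the cloud."""
    key = utterance.strip().lower()
    if key in ONBOARD_PHRASES:
        return {"action": ONBOARD_PHRASES[key], "source": "onboard"}
    result = cloud_recognize(key)
    result["source"] = "cloud"
    return result
```

The design point is latency and availability: frequent commands keep working offline and respond instantly, while open-ended requests can still use the heavier cloud service when a connection exists.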
In the fast lane with LTE
According to Richardson, research is moving in this direction, as the focus on telematics is establishing "always online" as the norm. "In 2016 all new cars will be online," Tim Nixon of GM tells futurezone. A permanent internet connection will make cloud services and outsourced computation possible.
BMW, with Telefonica, and Audi, with Alcatel, are independently testing LTE in cars. With this radio technology, fast transmission (up to 70 Mbit/s) should be possible even at motorway speeds. GM is also testing an LTE-based modem designed for telematics and entertainment. "In development we only work with 4G," says Mitchell Zarders of Kia. According to the researcher, the standard's fast connections should make augmented-reality solutions possible.