AI models to detect how you're feeling in sales calls

Plus: Driverless Cruise car gets pulled over by police, and more

In brief AI software is being offered to sales teams to analyze whether potential customers appear interested during virtual meetings.

Sentiment analysis is often used in machine-learning research to detect the emotions underlying text or video, and the technology is now being applied to gauge how prospective clients are feeling during sales pitches, Protocol reported this month.
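For a flavor of the text half of this, here's a minimal sketch using the open-source Hugging Face transformers library. The utterances are invented, and unlike the commercial tools described here, it scores only text, not tone of voice or video.

```python
# A minimal sketch of text sentiment analysis with Hugging Face's
# transformers library -- not the proprietary multimodal systems
# described in this article, which also score voice and video.
from transformers import pipeline

# Downloads a small pretrained sentiment model on first run.
classifier = pipeline("sentiment-analysis")

# Hypothetical snippets a sales tool might score, turn by turn.
utterances = [
    "That demo was really impressive, tell me more about pricing.",
    "Hmm, I'm not sure this fits our workflow at all.",
]

for text in utterances:
    result = classifier(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```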

The COVID-19 pandemic has pushed many meetings online as employees work from home. "It's very hard to build rapport in a relationship in that type of environment," said Tim Harris, director of product marketing at Uniphore, a software company specializing in conversational analytics.

The hope is that AI could automatically tell sellers when they're boring a client, so they can change tack immediately, for example by being more empathetic, to keep the client interested. Reactions to individual products could be tracked too, giving vendors what Harris calls the "emotional state of a deal."

Zoom reportedly plans to add sentiment analysis that scores conversations after the fact, so people can see how they performed on their last call. The idea that AI can accurately detect human emotions, however, has been repeatedly challenged by experts. This may be one part of life best left to humans.

Puzzled police flag down driverless car

Police pulled over a Cruise self-driving car in San Francisco, and when an officer walked up to the vehicle, it was completely empty. Shortly afterwards the driverless vehicle moved off, crossed the street, and then parked with its hazard lights on.

Officers appeared puzzled, milling around the car while passersby erupted in laughter. You can watch a video recording of the encounter here.

The car was flagged down because it was driving around at night without lights on, a Cruise spokesperson confirmed. "The vehicle yielded to the police car, then pulled over to the nearest safe location for the traffic stop," the spokesperson told The Verge earlier this month.

"An officer contacted Cruise personnel and no citation was issued. We work closely with the SFPD on how to interact with our vehicles and have a dedicated phone number for them to call in situations like this."

In October Cruise got the green light to operate between 2200 and 0600 PT in the US city; it's not clear why the vehicle was driving without its lights on. Cruise says it has fixed the issue.

Small Chinchilla language model from DeepMind reportedly trumps bigger systems

Language models with hundreds of billions of parameters are all the rage right now, though engineers don't have to make them so large to see strong performance.

DeepMind's Chinchilla, a comparatively modest model at 70 billion parameters, reportedly outperformed many larger systems, including DeepMind's own Gopher, OpenAI's GPT-3, AI21 Labs' Jurassic-1, and Nvidia and Microsoft's Megatron-Turing NLG, on numerous natural-language processing tasks.

The lesson, DeepMind argues, is that instead of making language models bigger, engineers should put more effort into training them on more data. The compute needed to train a model this way may not differ much from that of a larger system, but the payoff comes at inference time: smaller models are cheaper to deploy and run.

"We find that for compute-optimal training, the model size and the training dataset size should be scaled equally: for every doubling of model size the training dataset size should also be doubled," DeepMind researchers said in a blog post last week.

In other words, today's large models are undertrained. Given the same compute budget, a model trained on more data can be made smaller while maintaining the same performance.
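As a back-of-the-envelope check (parameter and token counts are from DeepMind's paper; the 6ND rule is a standard approximation for training FLOPs, not an exact cost model), Chinchilla and the far larger Gopher land on similar training budgets:

```python
# Rough check of the Chinchilla claim, using the common approximation
# that training compute C ~= 6 * N * D FLOPs for a model with
# N parameters trained on D tokens.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

gopher     = train_flops(280e9, 300e9)   # 280B params, 300B tokens
chinchilla = train_flops(70e9, 1.4e12)   # 70B params, 1.4T tokens

print(f"Gopher:     {gopher:.2e} FLOPs")      # ~5.0e23
print(f"Chinchilla: {chinchilla:.2e} FLOPs")  # ~5.9e23 -- a similar budget,
                                              # but a 4x smaller model to serve
```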

US President's AI advice panel grows

A panel of 27 experts from academia, industry, and non-profit organizations has been selected to serve on the United States' National Artificial Intelligence Advisory Committee (NAIAC).

The NAIAC will advise President Biden on all policies related to AI, ranging from how the technology affects national security to civil rights. "Artificial intelligence presents a new frontier for enhancing our economic and national security, as well as our way of life," Don Graves, deputy secretary of Commerce, said in a statement. 

"Moreover, responsible AI development is instrumental to our strategic competition with China. At the same time, we must remain steadfast in mitigating the risks associated with this emerging technology, and others, while ensuring that all Americans can benefit."

The NAIAC has been tasked with setting up a subcommittee to probe the use of AI in law enforcement; members have been asked to pay close attention to issues of bias, security, and privacy. They will convene for their first public meeting on May 4.

The committee includes representatives from big companies such as Google, IBM, Microsoft, and Nvidia, as well as from top universities including Stanford and Carnegie Mellon.

We're pleased to note that one member of the NAIAC is Jack Clark, a former journalist in The Register's San Francisco bureau and later a reporter for Bloomberg. After four years with OpenAI, he recently co-founded the AI safety and research startup Anthropic. ®
