Google voice recognition could transcribe doctor visits

Doctors work long hours, and a disturbingly large part of that time is spent documenting patient visits: one study indicates that they spend six hours of an 11-hour day making sure their records are up to snuff. But how do you streamline that work without hiring an army of note takers? Google Brain and Stanford think voice recognition is the answer. They recently partnered on a study that used automatic speech recognition (similar to what you'd find in Google Assistant or Google Translate) to transcribe both doctors and patients during a session. The approach can not only distinguish the voices in the room, but also the subjects: it's broad enough to account for both a sophisticated medical diagnosis and small talk about the weather. Doctors could have all the vital information they need for follow-ups and a better connection to their patients.

The system is far from perfect. The best voice recognition system in the study still had an error rate of 18.3 percent. That's good enough to be practical, according to the researchers, but it's not flawless. There's also the matter of making sure that any automated transcripts are truly private and secure. Patients in the study volunteered for recordings and will have their identifying information scrubbed out, but this process would need to be highly streamlined (both through consent policies and automation) to be effective on a large scale.

If voice recognition does find its way into doctors' offices, though, it could dramatically increase their effectiveness. Doctors could spend more time attending to patients and less time on the overhead needed to account for each visit. Ideally, this would also lead to more reasonable hours, so doctors won't burn out and risk clouding their judgment through fatigue.

Via: 9to5Google
Source: Google Research Blog, ArXiv.org
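For context on what that 18.3 percent figure likely means: speech recognition accuracy is conventionally reported as word error rate (WER), the word-level edit distance between the system's transcript and a human reference, divided by the reference length. Here's a minimal sketch of the standard metric; the function name and the sample sentences are purely illustrative, not taken from the study:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / reference length,
    computed via Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)

# Two substituted words out of six: WER = 2/6 ~ 0.333
print(wer("the patient reports mild chest pain",
          "the patient report mild chess pain"))
```

On this scale, an 18.3 percent error rate means roughly one word in five or six differs from what a human transcriber would have written.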


Google’s new algorithm shrinks JPEG files by 35 percent

For obvious reasons, Google has a vested interest in reducing the time it takes to load websites and services. One method is shrinking the file size of images on the internet, something the company previously pulled off in 2014 with the WebP format, which reduced photos by 10 percent. Its latest development in this vein is Guetzli, an open-source algorithm that encodes JPEGs that are 35 percent smaller than currently produced images. As Google points out in its blog post, this reduction method is similar to its Zopfli algorithm, which shrinks PNG and gzip files without needing to create a new format. Techniques like RNN-based image compression and WebP, on the other hand, require both clients and the wider ecosystem to change before they see gains at internet scale.

If you want to get technical, Guetzli (Swiss German for "cookie") targets the quantization stage of image compression, wherein visual quality is traded for a smaller file size. Its particular psychovisual model (yes, that's a thing) "approximates color perception and visual masking in a more thorough and detailed way than what is achievable" in current methods. The only tradeoff: Guetzli takes a little longer to run than compression options like libjpeg.

Despite the increased time, Google's post assures that human raters preferred the images churned out by Guetzli. Per the example below, the uncompressed image is on the left, the libjpeg-shrunk version in the center and the Guetzli-treated one on the right.

Source: Google Research Blog
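To make the quantization stage concrete: a JPEG encoder divides each block of DCT coefficients by a quantization table and rounds to the nearest integer, and a coarser table zeroes out more high-frequency detail, which compresses better at the cost of fidelity. The sketch below is a generic JPEG-style illustration, not Guetzli's actual search; the table is the top-left corner of the standard JPEG luminance table, while the sample DCT block and the scale factor are made up for the example:

```python
# JPEG-style quantization sketch (illustrative only; Guetzli's actual
# optimization over quantization choices is far more sophisticated).
Q_TABLE = [[16, 11, 10, 16],
           [12, 12, 14, 19],
           [14, 13, 16, 24],
           [14, 17, 22, 29]]  # top-left 4x4 of the standard JPEG luminance table

def quantize(block, table, scale=1.0):
    # Larger scale -> coarser quantization -> more coefficients rounded to zero.
    return [[round(c / (q * scale)) for c, q in zip(brow, qrow)]
            for brow, qrow in zip(block, table)]

# Made-up 4x4 block of DCT coefficients (low frequencies top-left).
dct_block = [[260, -32, 14,  5],
             [-24,  18, -9,  3],
             [ 10,  -7,  4, -2],
             [  4,   2, -1,  1]]

mild = quantize(dct_block, Q_TABLE, 1.0)
coarse = quantize(dct_block, Q_TABLE, 4.0)
zeros = lambda b: sum(row.count(0) for row in b)
print(zeros(mild), zeros(coarse))  # the coarser table yields more zeros
```

Long runs of zeros are exactly what the later entropy-coding stage compresses well, which is why spending more effort choosing what to discard at this stage, as Guetzli does, pays off in file size.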
