Researchers report >4,000 apps that secretly record audio and steal logs

A single threat actor has aggressively bombarded Android users with more than 4,000 spyware apps since February, and in at least three cases the actor snuck the apps into Google’s official Play market, security researchers said Thursday. Soniac was one of the three apps that made its way into Google Play, according to a blog post published Thursday by a researcher from mobile security firm Lookout. The app, which had from 1,000 to 5,000 downloads before Google removed it, provided messaging functions through a customized version of the Telegram communications program. Behind the scenes, Soniac had the ability to surreptitiously record audio, take photos, make calls, send text messages, and retrieve logs, contacts, and information about Wi-Fi access points. Google ejected the app after Lookout reported it as malicious.

Two other apps—one called Hulk Messenger and the other Troy Chat—were also available in Play but were later removed. It’s not clear if the developer withdrew the apps or if Google expelled them after discovering their spying capabilities. The remaining apps—which since February number slightly more than 4,000—are being distributed through other channels that weren’t immediately clear. Lookout researcher Michael Flossman said those channels may include alternative markets or targeted text messages that include a download link. The apps are all part of a malware family Lookout calls SonicSpy.

Taken from:
Researchers report >4,000 apps that secretly record audio and steal logs

AI film editor can cut scenes in seconds to suit your style

AI has won at Go and done a few other cool things, but so far it’s been mighty unimpressive at harder tasks like customer service, Twitter engagement and script writing. However, a new algorithm from researchers at Stanford and Adobe has shown it’s pretty damn good at video dialogue editing, something that requires artistry, skill and considerable time. The bot not only removes the drudgery, but can edit clips using multiple film styles to suit the project.

First of all, the system can organize “takes” and match them to lines of dialogue from the script. It can also do voice, face and emotion recognition to encode the type of shot, the intensity of the actor’s feelings, camera framing and other things. Since directors can shoot up to 10 takes per scene (or way more, in the case of auteurs like Stanley Kubrick), that alone can save hours.

However, the real power of the system is doing “idiom” editing based on the rules of film language. For instance, many scenes start with a wide “establishing” shot so that the viewer knows where they are. You can also use leisurely or fast pacing, emphasize a certain character, intensify emotions or keep shot types (like wide or closeup) consistent. Such idioms are generally used to best tell the story in the way the director intended. All the editor has to do is drop their preferred idioms into the system, and it will cut the scene to match automatically, following the script. In one example, the team selected “start wide” to establish the scene, “avoid jump cuts” for a cinematic (non-YouTube) style, “emphasize character” (“Stacey”) and a faster-paced performance. The system instantly created a cut that was pretty darn watchable, closely hewing to the comedic style the script was going for. The team then shuffled the idioms, and it generated a “YouTube” style that emphasized hyperactive pacing and jump cuts.
What’s best (or worst, perhaps, for professional editors) is that the algorithm was able to assemble the 71-second cut within two to three seconds and switch to a completely different style instantly. Meanwhile, it took an editor three hours to cut the same sequence by hand, counting the time it took to watch each take.

The system only works for dialogue, not action or other types of sequences. It also has no way to judge the quality of the performance, naturalism or emotional beats in a take. Editors, producers and directors still have to examine all the video that was shot, so AI is not going to take those jobs away anytime soon. However, it looks like it’s about ready to replace the assistant editors who organize all the materials, or at least do a good chunk of their work. More importantly, it could remove a lot of the slogging normally required to edit and let an editor see some quick cuts based on different styles. That would leave more time for fine-tuning, where skill and artistic talent are most crucial. Source: Stanford
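The article doesn’t publish the Stanford/Adobe scoring method, but the idiom idea described above — score each candidate take for a line of dialogue against the editor’s chosen rules, then pick the best — can be roughed out as a greedy sketch. The idiom names, shot labels and weights below are all made up for illustration; the real system uses a richer film-language model and presumably optimizes over the whole scene rather than greedily:

```python
# Hypothetical sketch of idiom-based take selection. Each idiom scores a
# candidate shot given its position in the scene and the previous shot;
# for every line of dialogue we keep the highest-scoring take.

IDIOMS = {
    # Prefer a wide establishing shot on the very first line.
    "start_wide":      lambda i, shot, prev: 1.0 if (i > 0 or shot == "wide") else 0.0,
    # Penalize cutting to the same shot type twice in a row.
    "avoid_jump_cuts": lambda i, shot, prev: 0.0 if shot == prev else 1.0,
    # Reward keeping the shot type consistent across cuts.
    "keep_consistent": lambda i, shot, prev: 1.0 if (prev is None or shot == prev) else 0.0,
}

def edit_scene(takes_per_line, active_idioms):
    """takes_per_line: one list of candidate shot types per dialogue line,
    e.g. [["wide", "closeup"], ["closeup", "medium"]]. Returns the chosen cut."""
    cut, prev = [], None
    for i, candidates in enumerate(takes_per_line):
        best = max(
            candidates,
            key=lambda shot: sum(IDIOMS[name](i, shot, prev) for name in active_idioms),
        )
        cut.append(best)
        prev = best
    return cut

# Greedy cut obeying "start wide", then avoiding jump cuts.
cut = edit_scene(
    [["closeup", "wide"], ["wide", "closeup"], ["closeup", "wide"]],
    ["start_wide", "avoid_jump_cuts"],
)
print(cut)
```

Swapping the active idiom list (say, dropping "avoid_jump_cuts" for a hyperactive YouTube style) regenerates the whole cut instantly, which is the behavior the demo showed.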

See original article:
AI film editor can cut scenes in seconds to suit your style

Disney’s projection tech turns actors’ faces into nightmare fuel

Disney is taking scary clown makeup to the next level. It’s using a new projection system to transform the appearance of actors during live performances, tracking facial expressions and “painting” them with light rather than physical makeup. Called Makeup Lamps, the system was developed by a team at Disney Research, and it could potentially change the way stage makeup is used in future theater productions.

Makeup Lamps tracks an actor’s movements without using the facial markers common in motion capture, then displays any color or texture the actor wants by adjusting the lighting. It can make someone appear older by creating “wrinkles” on their face, for example, or it can paint their face in creepy clown makeup, à la Heath Ledger in The Dark Knight. And all of it is done in real time. A similar technology was used earlier this year during Lady Gaga’s performance at the Super Bowl. Nobumichi Asai, creative director of Japanese visual studio WOW, was brought in to create a red lightning bolt on Gaga’s face during her David Bowie tribute. The attention that performance received has helped the technology become more mainstream.

Latency — the time between generating an image that matches the actor’s pose and when the image is displayed — is a big challenge to live augmentation, of course. Too much of it will cause the projection and the actor’s face to appear out of sync. Disney’s research team combated this problem by limiting the complexity of its algorithms and employing a method called Kalman filtering, which uses measurements over time to make predictions and minor adjustments.

“We’ve seen astounding advances in recent years in capturing facial performances of actors and transferring those expressions to virtual characters,” said Markus Gross, vice president at Disney Research. “Leveraging these technologies to augment the appearance of live actors is the next step and could result in amazing transformations before our eyes of stage actors in theaters or other venues.” Source: EurekAlert
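The article names Kalman filtering as the latency trick without giving details, but the core idea — blend a prediction with each noisy tracker measurement so the projected image tracks where the face is heading rather than jittering behind it — fits in a few lines. This is a minimal one-dimensional sketch; the class name, noise parameters and sample data are all illustrative assumptions, not Disney’s actual code:

```python
# Minimal 1D Kalman filter: smooth the noisy x-coordinate of a tracked
# facial landmark so the projection stays in sync with the performer.

class Kalman1D:
    def __init__(self, q=1e-3, r=1e-1):
        self.x = 0.0   # current position estimate
        self.p = 1.0   # estimate uncertainty
        self.q = q     # process noise: how quickly the face can move
        self.r = r     # measurement noise: tracker jitter

    def update(self, z):
        # Predict: the position carries over, uncertainty grows a little.
        self.p += self.q
        # Correct: blend the prediction with the new measurement z.
        k = self.p / (self.p + self.r)   # Kalman gain (0..1)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Smooth a jittery sequence of landmark x-coordinates.
kf = Kalman1D()
noisy = [10.0, 10.4, 9.8, 10.1, 10.3]
smoothed = [kf.update(z) for z in noisy]
```

A full system would run a filter like this per landmark (or on pose parameters) and project the state slightly forward in time, which is how prediction compensates for the render-and-display delay the article describes.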

See the original article here:
Disney’s projection tech turns actors’ faces into nightmare fuel

Adobe Is Working On ‘Photoshop For Audio’ That Will Let You Add Words Someone Never Said

An anonymous reader quotes a report from The Verge: Adobe is working on a new piece of software that would act like a Photoshop for audio, according to Adobe developer Zeyu Jin, who spoke at the Adobe MAX conference in San Diego, California today. The software is codenamed Project VoCo, and it’s not clear at this time when it will materialize as a commercial product. Like Photoshop, Project VoCo is designed to be a state-of-the-art editing application. Beyond standard speech editing and noise cancellation features, Project VoCo can also apparently generate new words using a speaker’s recorded voice. The standout feature, in other words, is the ability to add words not originally found in the audio file. Essentially, the software can understand the makeup of a person’s voice and replicate it, so long as there’s about 20 minutes of recorded speech. In Jin’s demo, the developer showcased how Project VoCo let him add a word to a sentence in a near-perfect replication of the speaker, according to Creative Bloq. So, similar to how Photoshop ushered in a new era of editing and image creation, this tool could transform how audio engineers work with sound, polish clips, and clean up recordings and podcasts. “When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative,” reads an official Adobe statement. “We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words.”

More:
Adobe Is Working On ‘Photoshop For Audio’ That Will Let You Add Words Someone Never Said

Slack to start integrating native voice chat into its app

A couple of months ago, you could start making Skype calls from within Slack, an award-winning work chat app that’s pretty popular with a lot of companies (we certainly use it over here in the Engadget office). Now, however, voice calls are simply baked into the app itself, without you having to use an external service. The feature is in beta right now, and testing will roll out in Slack’s desktop apps as well as in Chrome. The voice calling feature actually comes from Slack’s acquisition of Screenhero over a year ago. If you have it, you’ll spot a phone icon at the top of your screen next to the info button. Click it and you can initiate a voice call much like in most other chat apps out there. This doesn’t work with just individuals either; you can also make channel-wide calls with up to 15 people, though only if you pay for the service. And because this is Slack — known for its wide range of emoji — you can also respond to voice chats with one of several colorful reactions superimposed over your user icon. This isn’t to say that Slack will stop supporting the aforementioned Skype or other voice chat services; it’s just another option. We should also note that rival HipChat has had voice and video chat for a while now. Still, for loyal Slack users, this is great news; here’s hoping video support will be coming too. Via: The Verge Source: Slack

Taken from:
Slack to start integrating native voice chat into its app

Disney’s FaceDirector changes facial expressions in movies

The new tool out of Disney Research’s labs could turn an ingénue’s semi-decent attempt into a finely nuanced performance. The software, called FaceDirector, can merge separate frames from different takes to create the perfect scene. It does that by analyzing both the actor’s face and audio cues to identify the frames that correspond with each other. As such, directors can create brand-new takes during post-production with zero input from the actor. They don’t even need specialized hardware like 3D cameras for the trick — it works even with footage taken by regular 2D cams. According to Disney Research VP Markus Gross, the tool could be used to lower a movie’s production costs or to stay within budget, say, if it’s an indie film that doesn’t have a lot of money to spare. “It’s not unheard of for a director to re-shoot a crucial scene dozens of times, even 100 or more times, until satisfied,” he said. “That not only takes a lot of time — it also can be quite expensive. Now our research team has shown that a director can exert control over an actor’s performance after the shoot with just a few takes, saving both time and money.” Considering the lab also developed a way to make dubbed movies more believable and to take advantage of incredibly high frame rates, we wouldn’t be surprised if filmmakers arm themselves with an arsenal of Disney Research tools in the future. It’s probably hard to visualize the way FaceDirector works without seeing an example, so make sure to watch the demo video to see it in action. Source: Disney Research (1), (2)

Originally posted here:
Disney’s FaceDirector changes facial expressions in movies

Disney’s Super-Realistic CG Eyeballs Are an Uncanny Valley Airlift

What most often gives away a CG character as fake is their dead, lifeless eyes. It’s a common contributing factor to the uncanny valley effect, but now researchers at Disney have developed a system to perfectly capture a performer’s eyes, which promises to make CG characters finally appear more lifelike and convincing.

Read more here:
Disney’s Super-Realistic CG Eyeballs Are an Uncanny Valley Airlift

Fire TV: Everything You Need to Know About Amazon’s Streaming Box

Amazon has kicked off its arrival to the streaming party with today’s announcement of a new device called Fire TV, unveiled at a popcorn-scented New York event to satisfy all your TV-watching needs. Here’s everything you need to know about it.

Read this article:
Fire TV: Everything You Need to Know About Amazon’s Streaming Box

The complete map to Earth’s deepest cave—7,208 feet deep, 8 miles long

At 2,197 meters (7,208 feet), the Krubera cave is the deepest on Earth. Located in the Arabika Massif of the Western Caucasus in Abkhazia, Georgia, it extends for 13.432 kilometers (8.346 miles). I would love to get inside, but I know the fear would paralyze me. I love to go through its complete (so far) map, though.

Read this article:
The complete map to Earth’s deepest cave—7,208 feet deep, 8 miles long

Vin Diesel announces a fourth Riddick movie is coming

The third installment of the Riddick franchise may have turned our favorite antihero into a peeping tom, but that didn’t stop it from being a huge hit on DVD. And because of that, Riddick is getting another movie!

Read the article:
Vin Diesel announces a fourth Riddick movie is coming