Adobe’s ‘Cloak’ experiment is a content-aware eraser for video

Glamorous show-reels from shows like Game of Thrones get all the fame, but a lot of VFX work is mundane stuff like removing cars, power lines and people from shots. Adobe’s research team is working on making all of that easier for anyone, regardless of budget, thanks to a project called “Cloak.” It’s much the same as “content-aware fill” in Photoshop, letting you select and then delete unwanted elements, with the software intelligently filling in the background. Cloak does the same thing to moving video, though, which is a significantly bigger challenge.

Engadget got an early look at the tech, including a video demonstration and a chance to talk with Adobe research engineer Geoffrey Oxholm and Victoria Nece, product manager for video graphics and VFX. At the moment, the technology is in the experimental stages, with no set plans to implement it. However, Adobe likes to give the public “Sneaks” at some of its projects as a way to generate interest and market features internally to teams. An example of that would be last year’s slightly alarming “VoCo” tech that lets you Photoshop voiceovers or podcasts. That has yet to make it into a product, but one that did is “Smartpic,” which eventually became part of Adobe’s Experience Manager.

The “Cloak” tech wouldn’t just benefit Hollywood — it could be useful to every video producer. You could make a freeway look empty by removing all the cars, cut out people to get a pristine nature shot, or delete, say, your drunk uncle from a wedding shot. Another fun example: when I worked as a compositor in another life, I had to replace the potato salad in a shot with macaroni, which was a highly tedious process. Object removal will also be indispensable for VR, AR and other types of new video tech. “With 360-degree video, the removal of objects, the crew and the camera rig becomes virtually mandatory,” Nece told Engadget.
Content-aware fill on photos is no easy task in the first place, because the computer has to figure out what was behind the deleted object based on the pixels around it. Video increases the degree of difficulty, because you have to track any moving objects you want to erase. On top of that, the fill has to look the same from frame to frame or it will be a glitchy mess. “It’s a fascinating problem,” Oxholm said. “Everything is moving, so even if you nail one frame, you have to be consistent.”

Luckily, video does have one advantage over photos. “The saving grace is that we can see behind the thing we want to remove,” says Oxholm. “If you’ve got a microphone to remove, you can see behind the microphone.” In other words, if you’re doing a shot of a church with a pole in the way, there’s a good chance you have a different angle with a clean view of the church. Another thing making content-aware fill for video much more feasible now is the fact that motion-tracking technology has become so good. “We can do really dense tracking, using parts of the scene as they become visible,” said Oxholm. “That gives you something you can use to fill in.”

The results so far, as shown in the video above, are quite promising. The system was able to erase cars from a freeway interchange, did a decent job of deleting a pole in front of a cathedral and even erased a hiking couple from a cave scene. The shots were done automatically in “one quick process,” Oxholm said, after a mask was first drawn around the object to be removed — much as you do with Photoshop. It’s not totally perfect, however. Shadow traces are visible on the cave floor, and the cathedral is blurred in spots where the pole used to be. Even at this early stage, though, the tool could do much of the grunt work, making it easier for a human user to do the final touch-ups.
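The idea Oxholm describes (mask the object, then borrow background pixels from other frames where the occluded area is visible) can be sketched in miniature. The snippet below is a toy illustration under strong assumptions: a static camera and single-channel frames, with no motion tracking or spatial inpainting, both of which a real system like Cloak would need. The function name and structure are hypothetical, not Adobe’s implementation.

```python
import numpy as np

def temporal_fill(frames, masks):
    """Fill masked (object) pixels in each frame by copying the same
    pixel location from the nearest frame where it is unoccluded.
    frames: sequence of 2-D grayscale arrays; masks: matching boolean
    arrays, True where the unwanted object sits. Assumes a static camera.
    Pixels occluded in every frame are left unchanged."""
    frames = np.asarray(frames, dtype=float)
    masks = np.asarray(masks, dtype=bool)
    n = len(frames)
    out = frames.copy()
    for t in range(n):
        for y, x in zip(*np.nonzero(masks[t])):
            # search outward in time for the closest clean view
            for d in range(1, n):
                for s in (t - d, t + d):
                    if 0 <= s < n and not masks[s, y, x]:
                        out[t, y, x] = frames[s, y, x]
                        break
                else:
                    continue
                break
    return out
```

For example, if a pixel is covered by a passing object in the middle frame only, the fill simply reuses that pixel’s value from the frame before or after — the “we can see behind the thing we want to remove” advantage in its simplest form.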
I’d love to see Adobe release it in preview as soon as possible, even if it’s not perfect, as it looks like it could be a major time saver — I sure could’ve used it for that macaroni.


Collapsible 24" Display Explodes Onto Kickstarter

This might be the fastest we’ve ever seen anything get Kickstarted. Just 35 minutes after going live, SPUD, the Spontaneous Pop-Up Display with a 24-inch screen, hit its $33,000 goal. In scarcely 24 hours it’s already over $130,000 in pledges and climbing. In the pitch video, you get a much better look at the system than in the sneak peek we showed you on Monday.

Here are some of the details we’ve been waiting for: The screen isn’t glass, but a crack- and chip-proof vinyl composite that is wrinkle-resistant. The rear projection onto the screen reportedly “promises ultra-sharp images,” and the developers report that it does not require a dim environment to be used in. Should the device crack $250,000 in funding (which it surely will, given that there are still 44 days left in the campaign), the battery will be upgraded to last for a maximum of 10 hours rather than 6.

SPUD is expected to retail for $499; early-bird pledges at a reduced $349 price are all gone, but at press time there were still some $399 early-birds available. Shipping is scheduled for June of next year. Here’s the closest thing they’ve got to a real-world demo: This thing looks pretty amazing. Never mind the entertainment applications; this thing would be a boon to designers who are traveling with a laptop and unexpectedly need to attend to CAD emergencies.
