Adobe’s ‘Cloak’ experiment is a content-aware eraser for video

Glamorous show reels from shows like Game of Thrones get all the fame, but a lot of VFX work is mundane stuff like removing cars, power lines and people from shots. Adobe’s research team is working on making all of that easier for anyone, regardless of budget, thanks to a project called “Cloak.” It’s much the same as “content-aware fill” in Photoshop, letting you select and then delete unwanted elements, with the software intelligently filling in the background. Cloak does the same thing for moving video, though, which is a significantly bigger challenge.

Engadget got an early look at the tech, including a video demonstration and a chance to talk with Adobe research engineer Geoffrey Oxholm and Victoria Nece, product manager for video graphics and VFX. At the moment, the technology is in the experimental stages, with no set plans to implement it. However, Adobe likes to give the public “Sneaks” at some of its projects as a way to generate interest and market features internally to teams. An example of that would be last year’s slightly alarming “VoCo” tech that lets you Photoshop voiceovers or podcasts. That has yet to make it into a product, but one that did is “Smartpic,” which eventually became part of Adobe’s Experience Manager.

The “Cloak” tech wouldn’t just benefit Hollywood — it could be useful to every video producer. You could make a freeway look empty by removing all the cars, cut out people to get a pristine nature shot, or delete, say, your drunk uncle from a wedding shot. Another fun example: when I worked as a compositor in another life, I had to replace the potato salad in a shot with macaroni, which was a highly tedious process. Object removal will also be indispensable for VR, AR and other new types of video tech. “With 360-degree video, the removal of objects, the crew and the camera rig becomes virtually mandatory,” Nece told Engadget.

Content-aware fill on photos is no easy task in the first place, because the computer has to figure out what was behind the deleted object based on the pixels around it. Video increases the degree of difficulty, because you have to track any moving objects you want to erase. On top of that, the fill has to look the same from frame to frame or it will be a glitchy mess. “It’s a fascinating problem,” Oxholm said. “Everything is moving, so even if you nail one frame, you have to be consistent.”

Luckily, video does have one advantage over photos. “The saving grace is that we can see behind the thing we want to remove,” Oxholm said. “If you’ve got a microphone to remove, you can see behind the microphone.” In other words, if you’re shooting a church with a pole in the way, there’s a good chance another frame has a clean view of the church.

The other thing making content-aware fill for video much more feasible now is that motion-tracking technology has become so good. “We can do really dense tracking, using parts of the scene as they become visible,” Oxholm said. “That gives you something you can use to fill in.” The results so far, as shown in the video above, are quite promising. The system was able to erase cars from a freeway interchange, did a decent job of deleting a pole in front of a cathedral and even erased a hiking couple from a cave scene.

The shots were done automatically in “one quick process,” Oxholm said, after a mask was first drawn around the object to be removed — much as you do in Photoshop. It’s not totally perfect, however: shadow traces are visible on the cave floor, and the cathedral is blurred in spots where the pole used to be. Even at this early stage, though, the tool could do much of the grunt work, making it easier for a human user to do the final touch-ups. I’d love to see Adobe release it in preview as soon as possible, even if it’s not perfect, as it looks like it could be a major time saver — I sure could’ve used it for that macaroni.
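To make the recipe concrete, here is a minimal sketch in Python with OpenCV of the general idea described above: propagate a user-drawn mask through the clip with dense optical flow, fill each hole by borrowing real background pixels from frames where the background is visible, and fall back to single-image inpainting where it never is. This is an illustration of the concept, not Adobe’s actual algorithm, and it assumes a locked-off shot with no camera-motion compensation.

```python
# Toy video object removal: track a mask with optical flow, then fill,
# preferring background pixels revealed in other frames (the "we can see
# behind the thing we want to remove" advantage Oxholm describes).
import cv2
import numpy as np

def remove_object(frames, first_mask):
    """frames: list of BGR uint8 images; first_mask: uint8 mask (255 = object)."""
    masks = [first_mask]
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)

    # 1. Propagate the user-drawn mask forward with dense optical flow
    #    (an approximate backward remap along the flow field).
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        grid = np.mgrid[0:gray.shape[0], 0:gray.shape[1]].astype(np.float32)
        map_x = grid[1] - flow[..., 0]
        map_y = grid[0] - flow[..., 1]
        masks.append(cv2.remap(masks[-1], map_x, map_y, cv2.INTER_NEAREST))
        prev_gray = gray

    # 2. Fill: borrow pixels from frames where the background is visible;
    #    inpaint whatever remains hidden in every frame.
    results = []
    for i, (frame, mask) in enumerate(zip(frames, masks)):
        filled, hole = frame.copy(), mask > 0
        for j, (other, other_mask) in enumerate(zip(frames, masks)):
            if i == j:
                continue
            visible = hole & (other_mask == 0)  # background revealed in frame j
            filled[visible] = other[visible]
            hole &= ~visible
            if not hole.any():
                break
        if hole.any():
            filled = cv2.inpaint(filled, hole.astype(np.uint8) * 255,
                                 3, cv2.INPAINT_TELEA)
        results.append(filled)
    return results
```

A production tool like Cloak would replace the naive pixel borrowing with the dense tracking Oxholm mentions, so camera motion and parallax are handled and the fill stays consistent from frame to frame.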

Théoriz recreates the Holodeck with AR tech and projectors

If you had to list the most mind-blowing tech demos in recent memory, Microsoft’s HoloLens AR headset would need to be included, as would its projector-enhanced IllumiRoom. A company called Théoriz, from Lyon, France, has married both of those things to create a “mixed reality room” that uses projection, motion tracking and augmented reality together. Its latest technology demo video makes it seem like we’re closer to Star Trek’s Holodeck than ever before, so we went to take a closer look.

Théoriz is located at the “Pole Pixel,” a sprawling collection of studios east of Lyon used by Panavision and other cinema companies. The company’s mission is as much artistic as tech-oriented, so the engineers are both bohemian and code-savvy. “We are a team mostly composed of creative engineers,” says Théoriz co-founder David-Alexandre Chanel. “Engineers who have an artistic sensibility and also do good code.” To wit, the company has created some very technical and very whimsical projects, including an art installation called “Doors,” featuring portals that open up to an infinite space and change perspective as the viewer moves, and “Are You my Friend,” an industrial robot that communicates with exhibit-goers via a keyboard.

Art aside, the mixed reality room tech is impressive. The team tracks the camera (typically a RED model that can record and output in real time) with an HTC Vive Tracker and feeds the data to a computer running the Unity game engine. That generates digital environments like flying space skulls, a Minecraft-like room with holes that open up in the floor and geometric shapes that interact with actors to form stairs, wells or small hills. The computer syncs everything together, so that when the camera operator pans or tilts, the Unity scenes pan or tilt to match. Those are then beamed into the room via six projectors — four for the floor and two for the walls. At the same time, three Kinect-style 3D cameras, combined with Théoriz’s in-house “Augmenta” system, detect the positions of the actors so they can interact with the environment. Everything must be processed and played back in real time by the Unity-based system, something that required some clever coding and computing horsepower.

In the resulting videos, live actors interact seamlessly with virtual environments, creating a hallucinogenic effect. “It’s called mixed reality because we use and merge things from the virtual world with reality,” says Chanel. For instance, dancers can make the walls “move” with their movements and bat away flying asteroids. In the latest demo video (above), actors interact with bizarre geometric environments, opening up holes in the floor where they move and walking up fake stairs.

Though most of the tech is off the shelf, none of it is intended for consumers — at least, not yet. For now, the company just wants to sell its services for music videos, dance performances, art installations and other live events. At the same time, it’s improving the tech to make it more realistic and immersive. “We think that by changing the content creation process, we can open new creative possibilities and achieve unprecedented kind[s] of visuals,” says Chanel.

The next project will test everything Théoriz has learned so far, both artistically and technically. “We’re trying for the first time to show an artistic video with two dancers,” Chanel says. “And they’re going to dance and interact in the virtual world, moving through different kinds of totally surreal scenes.” Eventually, Théoriz might make its software available to other companies, but for now it’s just trying to make its services more compelling for artists and audiences. “It’s a new field,” says Chanel. “We still have to evangelize it and create demand so it can eventually find its place.” And the best way to do that? “Seduce the audience with something new, poetic and unexpected,” he says. Source: Théoriz
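The heart of the rig is the sync step: the tracked pose of the physical camera drives the virtual camera, so the projected scene pans and tilts in step with the operator. Here is a minimal sketch of that transform chain in Python; the pose values and the calibration constant are illustrative, since Théoriz hasn’t published its code.

```python
# Toy version of the camera-sync step: a tracked pose (e.g. from a Vive
# Tracker) is mapped into the virtual scene's coordinate frame each frame.
import numpy as np

def pose_matrix(position, yaw_pitch_roll):
    """Build a 4x4 camera-to-world matrix from a position and Euler angles (radians)."""
    yaw, pitch, roll = yaw_pitch_roll
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    roll_m = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    pitch_m = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    yaw_m = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    m = np.eye(4)
    m[:3, :3] = yaw_m @ pitch_m @ roll_m
    m[:3, 3] = position
    return m

# One-time calibration: where the tracking system's origin sits in the
# virtual scene. In a real rig this is measured during setup.
TRACKER_TO_SCENE = pose_matrix([2.0, 0.0, -1.5], [np.pi / 2, 0.0, 0.0])

def virtual_camera_pose(tracker_pose):
    """Per frame: tracked pose of the physical camera -> virtual camera pose."""
    return TRACKER_TO_SCENE @ tracker_pose

# Simulate the operator panning the physical camera; the virtual camera
# (and hence the projected Unity scene) must follow.
physical = pose_matrix([0.5, 1.7, 0.0], [0.1, 0.0, 0.0])
print(virtual_camera_pose(physical))
```

In the actual room this runs inside Unity at frame rate, with the resulting render split across the six projectors and the Augmenta system layering in the actors’ positions.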

Facebook’s new 360 cameras bring multiple perspectives to live videos

Last year, Facebook announced the Surround 360, a 360-degree camera that can capture footage in 3D and then render it online via specially designed software. But it wasn’t for sale. Instead, the company used it as a reference design for others to create 3D 360 content, even going so far as to open-source it on GitHub later that summer. As good as the camera was, though, it still didn’t deliver the full VR experience. That’s why Facebook is introducing two more 360-degree cameras at this year’s F8: the x24 and x6. The difference: these cameras can shoot with six degrees of freedom, which promises to make 360 footage more immersive than before.

The x24 is so named because it has 24 cameras; the x6, meanwhile, has — you guessed it — six. While the x24 looks like a giant beach ball with many eyes, the x6 is shaped more like a tennis ball, which makes for a less intimidating look. Both are designed for professional content creators, but the x6 is obviously meant to be a smaller, lighter and cheaper version. Both are part of the Surround 360 family, and, as with version one (now called the Surround 360 Open Edition), Facebook doesn’t plan on selling the cameras itself. Instead, it plans to license the x24 and x6 designs to a “select group of commercial partners.” Still, the versions you see in the images here were prototyped in Facebook’s on-site hardware lab (cunningly called Area 404) using off-the-shelf components. The x24 was made in partnership with FLIR, a company mostly known for its thermal-imaging cameras, while the x6 prototype was made entirely in-house.

But before we get into all of that, let’s talk a little about what sets these cameras apart from normal 360 ones. With a traditional fixed camera, you see the world through its fixed lens. If you’re viewing that content (also known as stereoscopic 360) in a VR headset and you decide to move around, the world stays still as you move, which is not how it would look in the real world. That makes the experience pretty uncomfortable and takes you out of the scene; it becomes less immersive. With content shot with six degrees of freedom, however, this is no longer an issue. You can move your head to a position where the camera never was and still view the world as if you were actually there. Move your head from side to side, forwards and backwards, and the software reconstructs what the view looks like from those angles. All of this is due to special software Facebook has created, along with the carefully designed arrangement of the cameras — according to Brian Cabral, Facebook’s engineering director, it’s an “optimal pattern” for gathering as much information as possible.

I had the opportunity to look at a couple of different videos shot with the x24 at Facebook’s headquarters (using the Oculus Rift, of course). One was a scene shot in the California Academy of Sciences, specifically at the underwater tunnel in the Steinhart Aquarium. I was surprised to see that the camera’s view would follow my own as I tilted my head from left to right, and even when I crouched down on the floor. I could even step to the side and look “through” where the camera was, as if it weren’t there at all. If the video had been shot with a traditional 360 camera, I likely would have seen the camera tripod when I looked down. But with the x24, I just saw the floor, as if I were a disembodied ghost floating around.

Another wonderful thing about videos shot with six degrees of freedom is that each pixel has depth; each pixel is literally in 3D. That’s a breakthrough for VR content creators, and it opens up a world of possibilities in visual-effects editing: you can add 3D effects to live-action footage, a feat that would usually require a green screen. I saw this demonstrated in the other video, a scene shot on the roof of one of Facebook’s buildings. Facebook, along with Otoy, a Los Angeles-based cloud-rendering company, was able to add effects to the scene. Examples include floating butterflies, which wafted around when I swiped at them with a Touch controller. They also did a visual trick where I could step “outside” of the scene and encapsulate the entire video in a snow globe. All of this is possible because of the layers of depth the footage provides.

That’s not to say there weren’t bugs. The footage I saw had shimmering around the edges, which Cabral said is a flaw in the software that the team is working to fix. Plus, the camera can’t see what’s behind people, so there’s a tiny bit of streaking along the edges. Still, there’s lots of potential in this kind of content. “This is a new kind of media in video and immersive experiences,” said Eric Cheng, Facebook’s head of immersive media, who was previously the director of photography at Lytro. “Six degrees of freedom has traditionally been done in gaming and VR, but not in live action.” Cheng says many content creators have told him they’ve been waiting for a way to bridge live action into these “volumetric editing experiences.” Indeed, that’s partly why Facebook is partnering with post-production companies like Adobe, Foundry and Otoy to develop an editing workflow for these cameras. “Think of these cameras as content acquisition tools for content creators,” said Cheng.

But what about other cameras, like Lytro’s Immerge, for example? “There’s a large continuum of these things,” said Cabral. “Lytro sits at the very, very high end.” The Immerge is also not nearly as portable as the x24 and x6, which are designed for a much more flexible and nimble approach to VR capture. As for when cameras like these will make their way down to the consumer level, Facebook says that will come in future generations. “That’s the long arc of where we’re going with this,” said CTO Mike Schroepfer. “Our goal is simple: we want more people producing awesome, immersive 360 and 3D content. We want to bring people up the immersion curve. We want to be developing the gold standard and say this is where we’re shooting for.”
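A minimal sketch helps show why per-pixel depth buys you six degrees of freedom: with depth, every pixel becomes a 3D point that can be re-projected from a viewpoint the camera never occupied. The Python below illustrates the geometry only, with made-up intrinsics and synthetic data; Facebook’s multi-camera reconstruction is far more sophisticated than this.

```python
# Toy novel-view synthesis: unproject each pixel using its depth, shift
# the viewpoint, and reproject. This is the geometric core of 6DoF video.
import numpy as np

def reproject(image, depth, K, head_offset):
    """image: (H, W, 3) uint8; depth: (H, W) metres; K: 3x3 intrinsics;
    head_offset: (3,) viewer head translation in metres."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)

    # Unproject: pixel + depth -> 3D point in the camera's frame.
    points = (np.linalg.inv(K) @ pixels.T).T * depth.reshape(-1, 1)

    # Move the viewpoint (translation only, for brevity; rotation is the
    # same idea with a 3x3 matrix).
    points = points - head_offset

    # Reproject into the new view. Nearest-pixel splatting with no
    # occlusion handling -- the "shimmering edges" problem lives here.
    proj = (K @ points.T).T
    uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
    out = np.zeros_like(image)
    ok = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
          (uv[:, 1] >= 0) & (uv[:, 1] < h) & (points[:, 2] > 0))
    out[uv[ok, 1], uv[ok, 0]] = image.reshape(-1, 3)[ok]
    return out

# Example with synthetic data: a random 640x480 frame viewed 5 cm to the left.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 2.0)  # everything 2 m away
shifted = reproject(frame, depth, K, head_offset=np.array([0.05, 0.0, 0.0]))
```

The gaps this naive splatting leaves behind people and objects are exactly the regions the camera never saw, which is why Facebook’s footage shows streaking along those edges.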

Scan of original 1977 35mm print of Star Wars released online

A restored high-definition digital scan, taken from a 35mm print of the original, unmolested version of Star Wars, is now available online to those who are looking. May the celluloid be with you. Always.

While this isn’t the first time attempts have been made to restore Star Wars to its original theatrical version — the one without the much-maligned CGI effects and edits of the later “special” editions — it is the first to be based entirely on a single 35mm print of the film, rather than cut together from various sources. Here’s a post from the team that located prints and restored the film:

“Despite having access to the original source, and to all the cleaned footage as the project progressed, I was still completely blown away by the final version. I had no idea it could look so good! Honestly! Way back at the start I had created a comparison clip with the 2006 Bonus DVD on top and the raw scan of LPP on the bottom, in order to see which frames (if any) were missing from the print, and I remember being rather alarmed that it made the GOUT look good!”

Creator George Lucas, in disowning his original work, said that all the copies of it were destroyed. “The only issue with Team Negative 1’s version of the film,” reports Mark Walton, “is that it isn’t exactly legal.” Here it is, compared to the official Blu-ray:

https://youtu.be/mo24gFFk7WM
https://youtu.be/pFp9bSp-fro
https://youtu.be/3Wjx01CuqDs

August unveils a HomeKit-enabled lock, keypad and doorbell camera

August Home Inc., maker of the August Smart Lock, announced the forthcoming release of three new products, as well as a new service, at a press event in San Francisco today. The new lineup includes a second-generation Smart Lock, a Smart Keypad and a Smart Doorbell Camera.

The lock itself offers a number of design improvements over its predecessor, including a magnetic faceplate that won’t pop off every time you manually engage the lock, as well as a stainless steel indicator on the lock sleeve. Plus, being HomeKit-enabled, the new Smart Lock will let users issue voice commands through Siri rather than opening the app itself. It’s available for order today and will retail for $230; the older-generation locks will be discounted to an even $200.

The weatherproofed keypad will make granting temporary access to service providers (say, the delivery guy or your dog walker) much easier. Instead of forcing folks to download the August app just to use a one-time Bluetooth code, users will be able to program a four-to-six-digit PIN into the keypad themselves and then share the code via text message or what-have-you. And, like the older shared Bluetooth codes, these PINs can be set to last for as many hours, days or uses as you’d like. The keypad will retail for $80 when it becomes available in the next few weeks.

The Doorbell Cam is also weatherproofed, Wi-Fi enabled and pretty darn clever. It’s equipped with a standard motion detector, but instead of issuing a push notification every time a car drives past your house, the detector turns on the Wi-Fi camera (which runs human-detection software) to confirm that the movement is actually a person standing on your stoop, not just a car parking at the curb. The camera will retail for $200. What’s more, the camera integrates with the rest of the August devices, allowing you to remotely unlock your door for the delivery guy and record him both entering and exiting the residence. This ensures that he doesn’t swipe stuff from your house while dropping a package in the foyer. Plus, this way, you’ll never miss a delivery. The only drawback is that the camera is powered by your existing doorbell wiring, so if you don’t have a doorbell already installed (like me), you are SOL.

Finally, August is expanding the scope of its temporary-access system to allow services, not just individuals, access to your door. “We’re announcing a new service that we call Access,” August co-founder Jason Johnson told Engadget. “Probably the best way to describe it is, much like new transportation apps like Uber help consumers find service providers [in this case, drivers for hire – ed.] in third-party transportation logistics, we’re in third-party service logistics. We help consumers find services that have integrated with our platform and we help them connect in a trusted and secure way.” The company announced its initial launch partners today, including Sears, Postmates, Pro.com, Handy, Fetch, Shyp, BloomNation, Envoy, Rinse, HelloAlfred, Wag!, Pillow and Doorman. Instead of having to grant partial access to a delivery person every single time you order from Postmates, users will be able to grant access to Postmates the company. This may seem like a security issue, but it’s not that far off from what many August users already do with trusted service partners like these. Plus, the lock maintains copious logs about who opened which August-locked door with which access code, so tracking down the responsible party should your valuables go missing during a delivery will be a piece of cake.
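For a sense of how time- and use-limited codes plus an audit trail fit together, here is a minimal sketch in Python. The field names and behavior are hypothetical, modeled loosely on the keypad PINs described above, and are not August’s actual schema or API.

```python
# Toy model of expiring, use-limited access codes with an audit log,
# in the spirit of August's keypad PINs. Entirely hypothetical schema.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessCode:
    pin: str              # 4-to-6-digit code shared with the guest
    guest: str
    expires_at: datetime
    uses_left: int

@dataclass
class Lock:
    codes: dict = field(default_factory=dict)  # pin -> AccessCode
    log: list = field(default_factory=list)    # audit trail of attempts

    def grant(self, pin, guest, hours=24, uses=1):
        """Issue a temporary PIN limited by both time and use count."""
        self.codes[pin] = AccessCode(
            pin, guest, datetime.now() + timedelta(hours=hours), uses)

    def unlock(self, pin):
        code = self.codes.get(pin)
        ok = (code is not None and code.uses_left > 0
              and datetime.now() < code.expires_at)
        if ok:
            code.uses_left -= 1
        # Every attempt is logged -- this is what lets you trace who
        # opened which door with which code after a delivery.
        self.log.append((datetime.now(), pin,
                         code.guest if code else None, ok))
        return ok

door = Lock()
door.grant("4821", "dog walker", hours=6, uses=2)
assert door.unlock("4821")      # first visit succeeds
assert door.unlock("4821")      # second visit succeeds
assert not door.unlock("4821")  # third attempt is refused, but still logged
```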

Here’s the same picture taken with every iPhone that has existed

Though the new iPhone is called the iPhone 6, we’re actually on the eighth generation of iPhone. But who cares about that? Let’s just see how much the camera — maybe the most important feature on the iPhone after messaging — has improved over those eight generations. Hint: a lot.

A Speeding Ticket Camera Company Is Doctoring Evidence Photos

Nothing feels worse than getting a speeding ticket in the mail. What. The. Truck. As if they weren’t bad enough as is, a report from the Washington, DC metropolitan area suggests that the cameras used to catch you might not be playing by the rules. In fact, the camera contractors might be fudging the evidence to make sure you can’t challenge the tickets.
