Tech Today w/ Ken May

Archive for March 17th, 2017

GIFs are fun, and if you’re an artist, it’s easy enough to turn your art into a short animation in Photoshop. This quick tutorial from Adobe shows you how it’s done. Read more…
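As a rough, non-Photoshop illustration of the same idea, here is a minimal sketch that stitches a handful of frame images into a looping GIF with the Pillow library; the file names, frame count, and timing are placeholders, not anything from the Adobe tutorial.

```python
# Minimal sketch: assemble frame images into a looping GIF with Pillow.
# File names, frame count, and timing below are placeholders.
from PIL import Image

frames = [Image.open(f"frame_{i}.png").convert("RGB") for i in range(1, 5)]
frames[0].save(
    "animation.gif",
    save_all=True,             # write every frame, not just the first
    append_images=frames[1:],  # the remaining frames of the animation
    duration=100,              # milliseconds per frame
    loop=0,                    # 0 = loop forever
)
```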

Categories: reader

On Friday, the Dallas FBI confirmed to Gizmodo that it has made an arrest in a case involving Newsweek journalist Kurt Eichenwald, who says a man sent him a GIF over Twitter that, because of his epilepsy, caused him to have a seizure. A Dallas FBI spokesperson told Gizmodo that a press release with more details on… Read more…

Categories: reader

Google’s new algorithm shrinks JPEG files by 35 percent

Posted by kenmay on March - 17 - 2017

For obvious reasons, Google has a vested interest in reducing the time it takes to load websites and services. One method is shrinking the file size of images on the internet, something the company previously pulled off with the WebP format back in 2014, which shrank photos by about 10 percent. Its latest development in this vein is Guetzli, an open-source algorithm that encodes JPEGs that are 35 percent smaller than currently produced images. As Google points out in its blog post, this approach is similar to its Zopfli algorithm, which shrinks PNG and gzip files without needing to create a new format. Techniques such as RNN-based image compression and WebP, on the other hand, require both clients and the wider ecosystem to change before gains are seen at internet scale.

If you want to get technical, Guetzli (Swiss German for “cookie”) targets the quantization stage of image compression, where visual quality is traded for a smaller file size. Its particular psychovisual model (yes, that’s a thing) “approximates color perception and visual masking in a more thorough and detailed way than what is achievable” in current methods. The only tradeoff: Guetzli takes noticeably longer to run than compression options like libjpeg. Despite the increased time, Google’s post says human raters preferred the images churned out by Guetzli. In the example comparison Google provides, the uncompressed image is on the left, the libjpeg-shrunk version is in the center, and the Guetzli-treated one is on the right. Source: Google Research Blog
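For a rough sense of how such a comparison can be run in practice, here is a minimal sketch (not from Google’s post) that encodes the same source image with a standard libjpeg-based encoder via Pillow and with the open-source guetzli command-line tool, then compares the resulting file sizes. It assumes the guetzli binary is installed and on your PATH; the file names and quality setting are placeholders.

```python
# Rough size comparison: standard JPEG encoding (Pillow/libjpeg) vs. Guetzli.
# Assumes the open-source `guetzli` binary is installed and on PATH.
# File names and the quality value below are placeholders.
import os
import subprocess
from PIL import Image

SRC = "photo.png"                 # hypothetical source image
LIBJPEG_OUT = "photo_libjpeg.jpg"
GUETZLI_OUT = "photo_guetzli.jpg"
QUALITY = 95                      # Guetzli is intended for high-quality settings

# Baseline: encode with Pillow, which uses a libjpeg-compatible encoder.
Image.open(SRC).convert("RGB").save(LIBJPEG_OUT, "JPEG", quality=QUALITY)

# Guetzli: same nominal quality, but it spends extra encode time searching
# for quantization choices that yield a smaller file at comparable quality.
subprocess.run(["guetzli", "--quality", str(QUALITY), SRC, GUETZLI_OUT], check=True)

libjpeg_size = os.path.getsize(LIBJPEG_OUT)
guetzli_size = os.path.getsize(GUETZLI_OUT)
print(f"libjpeg: {libjpeg_size} bytes")
print(f"guetzli: {guetzli_size} bytes")
print(f"reduction: {100 * (1 - guetzli_size / libjpeg_size):.1f}%")
```

Actual savings will vary from image to image; the 35 percent figure is Google’s reported average at comparable visual quality.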

Categories: reader

What’s all this gaming blather about Ryzen? Let us explain. (credit: Mark Walton)

The response to AMD’s Ryzen processors with their new Zen core has been more than a little uneven. Eight cores and 16 threads for under $500 means that they’re unambiguously strong across a wide range of workloads; compute-bound tasks like compiling software and compressing video cry out for cores, and AMD’s pricing makes Ryzen very compelling indeed. But gaming performance has caused more dissatisfaction.

AMD promised a substantial improvement in instructions per cycle (IPC), and the general expectation was that Ryzen would be within striking distance of Intel’s Broadwell core. Although Broadwell is now several years old—it first hit the market way back in September 2014—the comparison was relevant. Intel’s high-core-count processors—both the High End Desktop parts, with six, eight, or 10 cores, and the various Xeon processors for multisocket servers—are all still using Broadwell cores.

Realistically, nobody should have expected Ryzen to be king of the hill when it comes to gaming. We know that Broadwell isn’t, after all; Intel’s Skylake and Kaby Lake parts both beat Broadwell in a wide range of games. This is the case even though Skylake and Kaby Lake are limited to four cores and eight threads; for many or most games, high IPC and high clock speeds are the key to top performance, and that’s precisely what Kaby Lake delivers. Read 56 remaining paragraphs | Comments
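To make the IPC-and-clock-speed reasoning concrete, here is a small illustrative sketch of the back-of-envelope model the article is implicitly using: lightly threaded, game-style performance scales roughly with IPC times clock speed, while well-parallelized workloads such as compiling or video compression also scale with core count. Every number below is an illustrative placeholder, not a measured IPC figure or benchmark result.

```python
# Back-of-envelope model only: relative performance estimated as
# IPC x clock (single-threaded) and IPC x clock x cores (well-parallelized).
# All figures are illustrative placeholders, not measurements.
chips = {
    "Ryzen-like (8C/16T)":    {"ipc": 0.95, "clock_ghz": 4.0, "cores": 8},
    "Kaby Lake-like (4C/8T)": {"ipc": 1.05, "clock_ghz": 4.5, "cores": 4},
}

for name, c in chips.items():
    single = c["ipc"] * c["clock_ghz"]   # what most games reward: IPC and clocks
    parallel = single * c["cores"]       # what compiling/video encoding reward: cores too
    print(f"{name}: single-thread ~{single:.2f}, parallel ~{parallel:.2f} (arbitrary units)")
```

Under this crude model the four-core chip with higher IPC and clocks wins the single-thread column while the eight-core chip wins the parallel column, which is exactly the split the article describes between gaming and compute-bound workloads.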

Categories: reader