Gigabyte Firmware Bugs Allow the Installation of BIOS/UEFI Ransomware

An anonymous reader writes from a report via BleepingComputer: Last week, at the Black Hat Asia 2017 security conference, researchers from cyber-security firm Cylance disclosed two vulnerabilities in the firmware of Gigabyte BRIX small computing devices that allow an attacker to write malicious content to the UEFI firmware. During their presentation, the researchers installed proof-of-concept UEFI ransomware that prevented the BRIX devices from booting, but they say the same flaws could also be used to plant rootkits that let attackers persist malware for years.

The two vulnerabilities are CVE-2017-3197 and CVE-2017-3198. The first is Gigabyte’s failure to implement write protection for its UEFI firmware. The second is another lapse on Gigabyte’s part: the company does not cryptographically sign its UEFI firmware files. Compounding both, Gigabyte’s firmware update process is itself insecure; it neither verifies downloaded files against a checksum nor uses HTTPS, fetching updates over plain HTTP instead. A CERT vulnerability note was published to warn users of the danger and of how easy the bugs are to exploit.
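For illustration, here is a minimal Python sketch of the kind of integrity checks the report says Gigabyte’s updater skips: fetching over HTTPS and verifying a published checksum before accepting a firmware image. The URL and checksum below are hypothetical placeholders, not Gigabyte’s actual update infrastructure, and a real updater would also verify a vendor signature before flashing.

```python
import hashlib
import urllib.request

# Hypothetical placeholders, not Gigabyte's real update infrastructure.
FIRMWARE_URL = "https://example.com/firmware/brix_update.bin"
EXPECTED_SHA256 = "0" * 64  # checksum the vendor would publish alongside the file

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download a firmware image over HTTPS and verify its SHA-256 checksum."""
    with urllib.request.urlopen(url) as resp:        # HTTPS transport, not plain HTTP
        blob = resp.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"checksum mismatch: got {digest}")
    # A real updater would additionally check a cryptographic signature over the
    # image (e.g. against a vendor public key) before writing it to flash.
    return blob

if __name__ == "__main__":
    image = fetch_and_verify(FIRMWARE_URL, EXPECTED_SHA256)
    print(f"verified {len(image)} bytes")
```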

Read the original post:
Gigabyte Firmware Bugs Allow the Installation of BIOS/UEFI Ransomware

Why Intel Insists Rumors Of The Demise Of Moore’s Law Are Greatly Exaggerated

From an article on Fast Company: Intel hasn’t lost its zeal for big leaps in computing, even as it changes the way it introduces new chips and branches beyond the PC processor into areas like computer vision and the internet of things. “Number one, too many people have been writing about the end of Moore’s law, and we have to correct that misimpression,” Mark Bohr, Intel’s technology and manufacturing group senior fellow and director of process architecture and integration, says in an interview. “And number two, Intel has developed some pretty compelling technologies … that not only prove that Moore’s law is still alive, but that it’s going to continue to provide the best benefits of density, cost performance, and power.”

But while Moore’s law soldiers on, it is no longer associated with the kinds of performance gains Intel was making 10 to 20 years ago; its practical benefits are not what they used to be. For each new generation of microprocessor, Intel used to follow a two-step cycle called the “tick-tock.” The “tick” is where Moore’s law takes effect: a new manufacturing process shrinks each transistor and packs more of them onto a chip. The subsequent “tock” introduces a new microarchitecture, which yields further performance improvements by optimizing how the chip carries out instructions. Intel would typically go through this cycle once every two years.

But in recent years, shrinking transistors has become more challenging, and in 2016 Intel made a major change. The latest 14 nm process added a third “optimization” step after the architectural change, with modest performance improvements and new features such as 4K HDR video support. In January, Intel said it would add a fourth optimization step, stretching the cycle out even further. The move to a 10 nm process won’t happen until the second half of 2017, three years after the last “tick,” and Intel expects the new four-step process to repeat itself. This “hyper scaling” lets computing power keep increasing while requiring fewer changes to the manufacturing process. If you divide the number of transistors in Intel’s current tick by the surface area of two common logic cells, the rate of improvement still works out to more than doubling every two years, keeping Moore’s law on track. “Yes, they’ve taken longer, but we’ve taken bigger steps,” Bohr said during his three-hour presentation.
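The “more than double every two years” claim is compound-growth arithmetic: a bigger density jump can offset a longer cadence. A rough sketch, using made-up density figures purely for illustration (the article does not give Intel’s actual cell-area numbers):

```python
import math

# Illustrative arithmetic only: the density values are invented for the example.
def years_to_double(old_density: float, new_density: float, years_elapsed: float) -> float:
    """How many years it takes density to double at the observed growth rate."""
    growth_per_year = (new_density / old_density) ** (1.0 / years_elapsed)
    return math.log(2) / math.log(growth_per_year)

# A hypothetical 3x density jump delivered over a 3-year cadence still works out
# to doubling roughly every 1.9 years, i.e. slightly ahead of the two-year pace.
print(round(years_to_double(37.5, 112.5, 3.0), 2))
```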

More:
Why Intel Insists Rumors Of The Demise Of Moore’s Law Are Greatly Exaggerated

Apple is building its own GPU for the iPhone and iPad

Imagination Technologies is famous for one thing: it’s the company that provides the graphics for the iPhone. But today, Imagination announced that its longstanding relationship with Apple is coming to an abrupt end. In a statement, the outfit conceded that Apple will replace the PowerVR GPU at the heart of its iOS devices with a graphics chip of its own design.

When Apple started making the iPhone, it used a generic, Samsung-made ARM system paired with a PowerVR GPU. Over time, Apple began crafting more and more of its own silicon, thanks to its purchase of various chip design firms. These days, the PowerVR chip in the A10 Fusion is one of the very few components Apple doesn’t fully control. The decision to dump Imagination was probably inevitable given the company’s trend towards control, but there may be another story here. Third-party analysts at The Linley Group spotted that the iPhone 7 used the same PowerVR GT7600 GPU as the iPhone 6S. That piece of silicon, while powerful, couldn’t sustain its peak performance for long and had to be throttled to avoid overheating. Apple’s unsentimentality about ditching chip makers that can’t meet performance targets is well known. After all, the company dropped PowerPC CPUs because — so the legend goes — Intel’s x86 silicon was getting faster while IBM and Motorola dragged their feet.

It’s clearly a massive blow for Imagination, which has already said it plans to take the matter to the courts. After all, building a graphics platform from scratch is likely to involve technology that other companies like Imagination have already patented. The famously secretive Apple is also not going to look favorably on one of its suppliers going public with a licensing dispute.

Imagination shares down 67% after end of agreement with Apple pic.twitter.com/jBazTt6IjT — Francisco Jeronimo (@fjeronimo) April 3, 2017

As TechCrunch explains, the split could spell doom for Imagination, since it relies upon Apple for the bulk of its cash. Worse, the news has already sent Imagination’s stock into freefall, dropping between 60 and 70 percent in the last few hours.

Via: TechCrunch | Source: Imagination Technologies

Read this article:
Apple is building its own GPU for the iPhone and iPad

Next-generation DDR5 RAM will double the speed of DDR4 in 2018

You may have just upgraded your computer to DDR4, or you may still be using DDR3, but in either case, nothing stays new forever. JEDEC, the organization in charge of defining new standards for computer memory, says it will be demoing the next-generation DDR5 standard in June of this year and finalizing the standard sometime in 2018. DDR5 promises double the memory bandwidth and density of DDR4, and JEDEC says it will also be more power-efficient, though the organization didn’t release any specific numbers or targets.

As with DDR4 back when it was announced, it will still be several years before any of us have DDR5 RAM in our systems. That’s partly because the memory controllers in processors and SoCs need to be updated to support DDR5, and those chips normally take two or three years to design from start to finish. DDR4 was finalized in 2012, but it didn’t begin to go mainstream until 2015, when consumer processors from Intel and others added support for it. DDR5 has no relation to GDDR5, a separate decade-old memory standard used for graphics cards and game consoles.
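Peak per-channel DRAM bandwidth is simply transfer rate times bus width, so “double the bandwidth” is easy to put in concrete terms. A back-of-the-envelope sketch below uses DDR4-3200 as the baseline; the DDR5 transfer rate shown is an assumption based on the article’s doubling claim, since JEDEC had not published figures:

```python
# Back-of-the-envelope only: DDR4-3200 is a real JEDEC speed grade; the DDR5
# figure simply assumes "double the bandwidth" as the article states.
def peak_bandwidth_gbs(mega_transfers_per_sec: float, bus_width_bits: int = 64) -> float:
    """Peak bandwidth of one 64-bit memory channel in GB/s (decimal)."""
    bytes_per_transfer = bus_width_bits / 8
    return mega_transfers_per_sec * 1e6 * bytes_per_transfer / 1e9

ddr4 = peak_bandwidth_gbs(3200)   # ~25.6 GB/s per channel
ddr5 = peak_bandwidth_gbs(6400)   # assumed doubling -> ~51.2 GB/s per channel
print(f"DDR4-3200: {ddr4:.1f} GB/s per channel, doubled: {ddr5:.1f} GB/s")
```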

More:
Next-generation DDR5 RAM will double the speed of DDR4 in 2018

Intel: Our next chips will be a ‘generation ahead’ of Samsung

Intel says that when its long-delayed 10-nanometer Cannon Lake chips finally arrive, they’ll be a “full generation ahead” of rivals Samsung and TSMC, thanks to “hyper scaling” that squeezes in twice as many transistors. That will yield CPUs with 25 percent more performance and 45 percent lower power use than its current Kaby Lake chips when they ship towards the end of 2017. Furthermore, Intel thinks the tech will keep Moore’s Law going and give it a 30 percent cost advantage over competitors like AMD.

These are bold words, particularly since chief rival Samsung is already producing 10-nanometer chips like the Snapdragon 835, the world’s hottest mobile CPU. However, Intel says that while the chip trace sizes are the same, it has better feature density, letting it squeeze in twice as many transistors as Samsung’s chips. That in turn produces smaller die sizes, which “allows Intel to continue the economics of Moore’s Law,” the company explains in a PowerPoint presentation. Down the road, Intel will also release enhanced versions of its 10-nanometer tech called 10+ and 10++. To be sure, that’s mostly marketing-speak meant to hold consumers’ attention until 7-nanometer chips come along, but Intel says the refinements will offer a further 15 percent performance and 30 percent efficiency boost.

Intel laid out all this chip info as part of its Technology and Manufacturing Day yesterday, probably to soothe buyers and investors. Not only did Samsung and Qualcomm beat it to the punch on 10-nanometer chips, AMD also unveiled Ryzen processors that could eat into both its high- and low-end PC markets. However, Intel sounds confident about its next-gen chips and beyond. It plans to build 10-nanometer chips for three years before moving on to 7-nanometer tech, about the same cycle length as its current 14-nanometer chips. “We are always looking three generations — seven to nine years — ahead,” says Intel Executive VP Stacy J. Smith. “Moore’s Law is not ending at any time we can see ahead of us.”

Source: Intel (PDF)
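Intel’s cost argument comes down to geometry: doubling transistor density roughly halves the die area needed for the same design, and a smaller die yields more gross dies per wafer. A rough sketch with illustrative numbers only (these are not Intel’s actual die sizes, yields, or wafer costs):

```python
import math

# Illustrative figures only; real die sizes and wafer costs are not given in the article.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Approximate gross dies per wafer (ignoring defects), standard edge-loss estimate."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius * radius
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

old_die, new_die = 120.0, 60.0   # hypothetical: 2x density -> roughly half the area
for area in (old_die, new_die):
    print(f"{area:.0f} mm^2 die -> ~{dies_per_wafer(area)} gross dies per 300 mm wafer")
```

Because edge losses shrink in relative terms as dies get smaller, halving the die area slightly more than doubles the gross dies per wafer, which is the cost lever Intel is pointing at.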

Read More:
Intel: Our next chips will be a ‘generation ahead’ of Samsung

Intel’s first Optane SSD for regular PCs is a small but super-fast cache

Intel is positioning its new Optane technology as the next big advancement in computer storage after SSDs, and today it’s announcing the first consumer product based on the technology. The “Intel Optane Memory” drives are 16GB and 32GB M.2 sticks that can be paired with a larger SSD or HDD to speed up total system performance. Intel’s Rapid Storage Technology allows your PC to see the two drives as one storage volume, and the software automatically caches important data to the faster drive. The Optane Memory drives will be available to order on April 24th. A 16GB drive costs $44, while a 32GB drive costs $77.
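The caching idea itself is straightforward: keep recently used blocks on the small fast device and fall back to the big slow one for everything else. The toy sketch below illustrates that promotion-and-eviction logic in general terms; it is not Intel’s Rapid Storage Technology, whose internals the article does not describe.

```python
from collections import OrderedDict

class TieredCache:
    """Toy model of a small fast tier in front of a large slow tier (not Intel RST)."""
    def __init__(self, fast_capacity_blocks: int, slow_store: dict):
        self.fast = OrderedDict()          # LRU-ordered "fast tier" stand-in
        self.capacity = fast_capacity_blocks
        self.slow = slow_store             # backing HDD/SSD stand-in

    def read(self, block_id: int) -> bytes:
        if block_id in self.fast:          # fast hit: refresh recency
            self.fast.move_to_end(block_id)
            return self.fast[block_id]
        data = self.slow[block_id]         # slow-tier read
        self.fast[block_id] = data         # promote the hot block to the fast tier
        if len(self.fast) > self.capacity: # evict the least recently used block
            self.fast.popitem(last=False)
        return data

store = {i: bytes([i % 256]) * 4096 for i in range(100)}
cache = TieredCache(fast_capacity_blocks=8, slow_store=store)
for block in [1, 2, 1, 3, 1]:
    cache.read(block)
print(list(cache.fast.keys()))             # recently used blocks now sit in the fast tier
```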

See the article here:
Intel’s first Optane SSD for regular PCs is a small but super-fast cache

Intel still beats Ryzen at games, but how much does it matter?

The response to AMD’s Ryzen processors and their new Zen core has been more than a little uneven. Eight cores and 16 threads for under $500 means they’re unambiguously strong across a wide range of workloads; compute-bound tasks like compiling software and compressing video cry out for cores, and AMD’s pricing makes Ryzen very compelling indeed. But gaming performance has caused more dissatisfaction.

AMD promised a substantial improvement in instructions per cycle (IPC), and the general expectation was that Ryzen would be within striking distance of Intel’s Broadwell core. Although Broadwell is now several years old—it first hit the market way back in September 2014—the comparison was relevant. Intel’s high-core-count processors—both the High End Desktop parts, with six, eight, or 10 cores, and the various Xeon processors for multisocket servers—are all still using Broadwell cores.

Realistically, nobody should have expected Ryzen to be king of the hill when it comes to gaming. We know that Broadwell isn’t, after all; Intel’s Skylake and Kaby Lake parts both beat Broadwell in a wide range of games. That’s the case even though Skylake and Kaby Lake are limited to four cores and eight threads; for many or most games, high IPC and high clock speeds are the key to top performance, and that’s precisely what Kaby Lake delivers.
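The “high IPC and high clock speeds” point reduces to a simple relationship: single-thread throughput scales roughly with instructions per cycle times clock frequency. A back-of-the-envelope sketch with made-up numbers, purely to show how small IPC and clock deficits compound (these are not measured Ryzen or Kaby Lake figures):

```python
# Made-up IPC and clock values for illustration; not measured Ryzen/Kaby Lake data.
def single_thread_score(ipc: float, clock_ghz: float) -> float:
    """Rough single-thread throughput in billions of instructions per second."""
    return ipc * clock_ghz

chips = {
    "hypothetical chip A (higher IPC, higher clock)": single_thread_score(1.10, 4.5),
    "hypothetical chip B (lower IPC, lower clock)":  single_thread_score(1.00, 4.0),
}
for name, score in chips.items():
    print(f"{name}: {score:.2f} GIPS")
# A 10% IPC edge combined with a 12.5% clock edge compounds to roughly a 24%
# single-thread lead, which is why games reward both at once.
```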

View article:
Intel still beats Ryzen at games, but how much does it matter?

Intel buys self-driving tech firm Mobileye for $15.3 billion

Intel’s recent work with Mobileye on self-driving cars must have gone well, because the chip giant is buying its Jerusalem-based partner for $15.3 billion. The deal was first reported by Israeli business site The Marker but has now been confirmed by the two companies. Mobileye is one of the largest players in autonomous vehicle tech and was in the news recently over a spat with Tesla following a fatal Model S crash in Florida. More recently, though, it teamed with Intel on BMW’s iNext self-driving platform, which the automaker aims to put into service by 2021.

The technology they’re working on isn’t just for BMW vehicles. The idea is to build a “scalable architecture” that can be used by any automaker, especially those that don’t want to build their own tech from scratch. As such, it could become a huge business for Mobileye, which may help explain the huge acquisition price. The deal is one of the largest acquisitions of an Israeli tech company ever.

Despite a recent PC renaissance thanks to Microsoft’s Surface and other devices, desktops are still losing ground to mobile devices. That has hurt Intel’s bottom line while benefiting companies like Qualcomm, which makes the chips used in many smartphones and tablets. The situation has pushed Intel into other areas like wearables, connected homes and “internet of things” devices, none of which has exactly taken off yet.

BREAKING: Intel to acquire Mobileye for $63.54 per share in cash, or about $15.3 billion. $INTC $MBLY — CNBC Now (@CNBCnow) March 13, 2017

Autonomous cars, on the other hand, are one of the hottest things in tech, with virtually every automaker, tech company and even peripheral firms like Uber and Lyft working on (and fighting about) them. Even if fully autonomous cars don’t work out as planned (some critics consider them a distant pipe dream), autopilot tech that aids drivers and prevents accidents is available now on Tesla EVs and other cars. Ironically, Mobileye’s early success was due in large part to Tesla, and that partnership dissolved in a not-very-friendly way.

Via: The Marker | Source: Intel / Mobileye (PDF)

Read the article:
Intel buys self-driving tech firm Mobileye for $15.3 billion

Intel Security Releases Detection Tool For EFI Rootkits After CIA Leak

After WikiLeaks released documents exposing the CIA’s arsenal of hacking tools, Intel Security has released a tool that lets users check whether their computer’s low-level system firmware has been modified and contains unauthorized code. PCWorld reports: The release comes after CIA documents leaked Tuesday revealed that the agency has developed EFI (Extensible Firmware Interface) rootkits for Apple’s MacBooks. The documents from the CIA’s Embedded Development Branch (EDB) mention an OS X “implant” called DerStarke that includes a kernel code injection module dubbed Bokor and an EFI persistence module called DarkMatter. In addition to DarkMatter, a second project in the CIA EDB documents, called QuarkMatter, is described as a “Mac OS X EFI implant which uses an EFI driver stored on the EFI system partition to provide persistence to an arbitrary kernel implant.”

The Advanced Threat Research team at Intel Security has created a new module for its existing CHIPSEC open-source framework to detect rogue EFI binaries. CHIPSEC consists of a set of command-line tools that use low-level interfaces to analyze a system’s hardware, firmware, and platform components. It can be run from Windows, Linux, macOS, and even from an EFI shell. The new CHIPSEC module lets the user take a clean EFI image from the computer manufacturer, extract its contents, and build a whitelist of the binary files inside. It can then compare that list against the system’s current EFI or against an EFI image previously extracted from a system.
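The whitelist comparison CHIPSEC performs works along these lines: hash every binary extracted from a known-good EFI image, then flag anything on the system that is unknown, modified, or missing. The sketch below is a generic Python illustration of that idea over two extracted directories; it is not the CHIPSEC module itself, which parses EFI firmware images directly, and the paths are hypothetical.

```python
import hashlib
from pathlib import Path

# Generic illustration of whitelist-style comparison; not the actual CHIPSEC module.
def hash_tree(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def compare(clean_dir: Path, current_dir: Path) -> None:
    """Report binaries that are unknown, modified, or missing versus the clean whitelist."""
    clean, current = hash_tree(clean_dir), hash_tree(current_dir)
    for name, digest in current.items():
        if name not in clean:
            print(f"UNKNOWN binary not in whitelist: {name}")
        elif clean[name] != digest:
            print(f"MODIFIED binary: {name}")
    for name in clean.keys() - current.keys():
        print(f"MISSING binary: {name}")

# Hypothetical directories holding binaries extracted from a clean vendor image
# and from the system's current firmware, respectively:
# compare(Path("clean_efi_extracted"), Path("current_efi_extracted"))
```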

Read More:
Intel Security Releases Detection Tool For EFI Rootkits After CIA Leak

Intel Reacts To AMD Ryzen, Apparently Cutting Prices On Core i7 And i5 Processors

Less than a week after AMD announced its first lineup of Ryzen processors, Intel appears to be fighting back by dropping the price of several of its own processors. Rob Williams, writing for HotHardware: So, what we’re seeing now is a bunch of Intel processors dropping in price, perhaps as a bit of a preemptive strike against AMD’s chips shipping later this week — though admittedly it’s still a bit too early to tell. Over at Amazon, the prices have been slower to fall, but we’d highly recommend you keep an eye on the following pages if you are looking for a good deal this week. So far, at Micro Center we’ve seen the beefy six-core Intel Core i7-6850K (3.60GHz) drop from $700 to $550, and the i7-6800K (3.40GHz) drop from $500 to $360. Some mid-range chips are receiving price cuts as well, including the i7-6700K, a 4.0GHz chip dropping from $400 to $260, and the i5-6600K, a 3.50GHz quad-core part dropping from $270 to $180. Even Intel’s latest and greatest Kaby Lake-based i7-7700K has seen a drop, from $380 to $299, with places like Amazon and Newegg retailing it for $349.

View article:
Intel Reacts To AMD Ryzen, Apparently Cutting Prices On Core i7 And i5 Processors