AMD Intentionally Added Artificial Limitations To Their HDMI Adapters

An anonymous reader writes “NVIDIA was caught removing features from their Linux driver, and days later Linux developers have caught and confirmed AMD imposing artificial limitations on their graphics cards through the DVI-to-HDMI adapters that their driver will support. Over the years, AMD has quietly been adding an extra EEPROM chip to the DVI-to-HDMI adapters bundled with its Radeon HD graphics cards. Only when one of these tagged adapters is detected, via checks in the Windows and Linux Catalyst driver, is HDMI audio enabled; with a third-party DVI-to-HDMI adapter, the Catalyst driver disables HDMI audio support. Open-source Linux developers have found this to be a self-imposed limitation and confirmed that the open-source AMD Linux driver works fine with any DVI-to-HDMI adapter.”
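The mechanism described above is easy to sketch. Below is a hypothetical Python illustration, assuming the adapter’s EEPROM is readable over the DVI connector’s DDC (I2C) lines and that the driver compares its contents against a known identifier; the helper function, the I2C address and the ID bytes are all invented for illustration and are not AMD’s actual Catalyst code.

```python
# Hypothetical sketch only: the helper, the I2C address and the ID bytes are
# invented for illustration and do not reflect AMD's actual Catalyst code.

KNOWN_ADAPTER_IDS = {b"AMD-HDMI-ADAPTER"}  # assumed tag stored in the adapter's EEPROM

def read_adapter_eeprom(i2c_bus, address=0x50, length=16):
    """Read the first bytes of the adapter EEPROM over the DVI DDC (I2C) lines."""
    return bytes(i2c_bus.read(address, length))

def hdmi_audio_allowed(i2c_bus):
    """Enable HDMI audio only if a bundled, EEPROM-tagged adapter answers."""
    try:
        blob = read_adapter_eeprom(i2c_bus)
    except OSError:
        return False  # nothing answered: treat it as a third-party adapter
    return any(blob.startswith(tag) for tag in KNOWN_ADAPTER_IDS)
```

The report’s point is that the open-source driver simply omits a gate of this kind, so HDMI audio works with any adapter the display chain otherwise supports.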

More here:
AMD Intentionally Added Artificial Limitations To Their HDMI Adapters

Malware Now Hiding In Graphics Cards

mask.of.sanity writes “Researchers are closing in on a way to detect stealthy malware that resides in peripherals such as graphics and network cards and has so far been impossible to spot. The malware, developed by the same researchers, targets host runtime memory using the direct memory access granted to hardware devices. They said the malware was a ‘highly critical threat to system security and integrity’ and could not be detected by any operating system.”

Read this article:
Malware Now Hiding In Graphics Cards

Apple iPhone 5S: Everything You Need to Know

There’s a new iPhone. Well, to be completely accurate, there are two new iPhones. But the iPhone 5S is Apple’s flagship phone, and it’s the best iPhone you can buy. It looks exactly like last year’s iPhone 5, but with improved internals that will make everything run even faster.

View article:
Apple iPhone 5S: Everything You Need to Know

The New iPhone’s A7 Chip Proves That Smartphone Innovation Is Slowing Down

With every new iPhone, most of the discussion centers on how it looks, not what is inside. But according to multiple reports, Apple has designed a new 64-bit dual-core A7 system on a chip for the iPhone 5S. It is supposedly 31 percent faster, which would represent a serious slowdown in spec improvement. It suggests that the smartphone market may have matured and that existing smartphone owners won’t feel the urge to upgrade to a new model anymore.

When it comes to smartphone chips, Apple is a lone ranger. It has been designing its own ARM-based chips for a couple of years, outsourcing production to Samsung and other manufacturers. But the important part is that only Apple devices use Apple chips. So far, this strategy has proven successful. The iPhone 4S was twice as powerful as the iPhone 4 and had nine times the graphics processing capability. The iPhone 5 was once again twice as fast as the iPhone 4S, with twice the graphics performance. That’s why this year’s 31 percent performance boost is lackluster. If the new iPhone is indeed called the iPhone 5S, the ‘S’ will certainly not stand for ‘speed’.

Paul Haddad (@tapbot_paul) tweeted on August 26, 2013: “A 31% CPU speed increase sounds like a huge failure to me, specially considering previous generations showed ~100% improvements.”

On paper, Android phones are more powerful. Right now, the Snapdragon 800 and Tegra 4 both come with at least four cores and more raw power. It seems that Apple doesn’t want to compete in the spec game anymore, and it isn’t saying why. The main advantage is that Apple can optimize the A7 for its own set of APIs, making it feel faster than it actually is. Even though Snapdragons have more GHz, iPhone apps are still fast because Apple takes advantage of its chip architecture like no one else. The gap isn’t as wide as expected. Moreover, Apple’s custom design strategy improves battery performance.

Yet why were the A6 and the A5 so much faster than their predecessors? Because smartphones were not as fast as Apple wanted them to be. If you want to use Siri or play demanding games, you need the iPhone 4S. If you want to use the upcoming AirDrop feature, you need the iPhone 5. Now the story is different: Apple probably thinks the iPhone 5 can already run everything perfectly well, and there is no need to add more raw power. In other words, smartphones have matured.

As smartphones become more widespread, Apple needs to reduce both component costs and R&D costs. The company can’t invest as much money in developing new chips if smartphones become more and more commoditized products, and it wants to avoid hurting its margins more than it needs to. The A7 also needs to be future-proof. While the iPhone 5C won’t get the A7 at first, entry-level iPhones will eventually receive it, so it has to be powerful enough, and cheap enough, that Apple doesn’t have to develop yet another chip next year for its cheaper iPhones.

If Apple judges that current chips are fast enough to power iOS for years, iPhone users shouldn’t expect big speed increases. Instead, the company will bet on new features and software updates. And with market maturation coming soon, Apple faces a nearly overwhelming challenge as well: how do you convince your customers to upgrade their phones?

The same thing happened with the iPod: it got lighter and lighter. In 2001, the original 5GB iPod weighed 6.5 ounces (184 grams). In 2004, the iPod mini was 3.6 ounces (102 grams). In 2005, the iPod nano was only 1.5 ounces (42 grams). At that point, if you already had an iPod and used it as a portable music player, there was no real incentive to upgrade to a new one, other than more gigabytes. The same is true of your microwave: you only buy a new one when the old one breaks.

Yet there is one thing that can be improved again and again on the iPhone: the camera. Everybody uses their phone as their primary camera; it’s the camera you always have in your pocket. While it has greatly improved over the years, there is still room for improvement, especially now that HiDPI displays are becoming more popular. This single spec upgrade will make people upgrade.

That’s why the most interesting news of the day isn’t the A7, but the new dedicated chip for video capture. In addition to helping with image stabilization, it could let you shoot 120 fps video. If the iPhone 5S can shoot smooth slow-motion video, that could be the feature that stands out and steals the show at Apple’s event. In fact, the ‘S’ could stand for ‘slow motion’.

(Image credits: Ascii.jp, Wikimedia Commons)

Continue reading here:
The New iPhone’s A7 Chip Proves That Smartphone Innovation Is Slowing Down

Hack a High End Graphics Card Into a Macbook Air

The Macbook Air is a far cry from a gaming computer, but that didn’t stop Tech Inferno forum member kloper from hacking together a system to play high-end games on a proper graphics card with a Macbook Air.

Read the article:
Hack a High End Graphics Card Into a Macbook Air

NVIDIA reveals GeForce GTX 700M series GPUs for notebooks, we go eyes-on

We’ve already seen a couple of new desktop GTX cards from NVIDIA this month, and if the mysterious spec sheet for MSI’s GT70 Dragon Edition 2 laptop wasn’t enough of a hint, the company’s got some notebook variants to let loose, too. The GeForce GTX 700M series, officially announced today, is a quartet of chips built on the Kepler architecture. At the top of the stack is the GTX 780M, which NVIDIA claims is the “world’s fastest notebook GPU,” taking the title from AMD’s Radeon HD 8970M. For fans of hard numbers, the 780M has 1,536 CUDA cores, an 823MHz base clock and memory configurations of up to 4GB of 256-bit GDDR5 — in other words, not a world apart from a desktop card. Whereas the 780M’s clear focus is performance, trade-offs for portability and affordability are made as you go down through the 770M, 765M and 760M. Nevertheless, the 760M is said to be 30 percent faster than its predecessor, and the 770M 55 percent faster. All of the chips feature NVIDIA’s GPU Boost 2.0 and Optimus technologies, and work with the GeForce Experience game auto-settings utility. The 700M series should start showing up in a host of laptops soon, and a bunch of OEMs have already pledged their allegiance. Check out a video with NVIDIA’s Mark Avermann after the break, where he shows off a range of laptops packing 700M GPUs, and helps us answer the most important question of all: can it run Crysis? (Or, in this case, Crysis 3.)
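To put the 780M’s headline specs in perspective, peak single-precision throughput for a Kepler GPU can be estimated as cores x 2 FLOPs per clock x clock speed. The back-of-the-envelope figure below uses only the base clock quoted above, so it understates what GPU Boost 2.0 would deliver; treat it as an estimate, not a benchmark.

```python
# Rough peak FP32 throughput for the GTX 780M from the specs quoted above:
# 1,536 CUDA cores at an 823MHz base clock, one fused multiply-add (2 FLOPs)
# per core per clock on Kepler.
cuda_cores = 1536
base_clock_hz = 823e6
flops_per_core_per_clock = 2

peak_fp32_tflops = cuda_cores * flops_per_core_per_clock * base_clock_hz / 1e12
print(f"Estimated peak FP32 throughput: {peak_fp32_tflops:.2f} TFLOPS")  # ~2.53 TFLOPS
```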

See more here:
NVIDIA reveals GeForce GTX 700M series GPUs for notebooks, we go eyes-on

Intel Claims Haswell Architecture Offers 50% Longer Battery Life vs. Ivy Bridge

MojoKid writes “As with any major CPU microarchitecture launch, one can expect the usual 10-15% performance gains, but Intel has apparently put its efficiency focus into overdrive. Haswell should provide 2x the graphics performance, and it’s designed to be as power efficient as possible. In addition, the company has gone on to state that Haswell should enable a 50% battery-life increase over last year’s Ivy Bridge. There are a couple of reasons why Haswell is so energy-efficient versus the previous generation, but the major one is moving the CPU voltage regulator off the motherboard and into the CPU package, creating a Fully Integrated Voltage Regulator, or FIVR. This is a far more efficient design, and with the use of ‘enhanced’ tri-gate transistors, current leakage has been reduced by roughly 2x to 3x versus Ivy Bridge.”
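Note that the 50 percent figure is a battery-life claim rather than a battery-capacity claim, so for the same workload it implies roughly a one-third drop in average platform power. The arithmetic below illustrates this with an assumed 50Wh battery and an assumed 10W Ivy Bridge platform draw; neither number comes from Intel.

```python
# Illustrative arithmetic only: the 50Wh battery and the 10W Ivy Bridge
# platform draw are assumptions, not Intel figures.
battery_wh = 50.0
ivy_bridge_platform_w = 10.0

ivy_bridge_hours = battery_wh / ivy_bridge_platform_w   # 5.0 h of runtime
haswell_hours = ivy_bridge_hours * 1.5                  # Intel's claimed +50% battery life
haswell_platform_w = battery_wh / haswell_hours         # ~6.7 W implied draw

power_drop = 1 - haswell_platform_w / ivy_bridge_platform_w
print(f"Ivy Bridge: {ivy_bridge_hours:.1f} h, Haswell: {haswell_hours:.1f} h")
print(f"Implied average platform power reduction: {power_drop:.0%}")  # ~33%
```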

View the original here:
Intel Claims Haswell Architecture Offers 50% Longer Battery Life vs. Ivy Bridge

NVIDIA releases GeForce GTX 780 for $649, claims more power with less fan noise

It’s well over a year since the GTX 680 came out, but given what a strong contender that card was, it may feel too early for an upgrade. NVIDIA knows the score, which is why it’s made a particular point of pitching this year’s card at owners of the GTX 580 instead. Upgraders from that GPU are promised a 70 percent lift in performance, which is about double the gain a GTX 680 owner would see. On the other hand, something more people might notice — if NVIDIA’s slides prove to be accurate — is a 5dBA drop in noise pollution, as well as a new approach to fan control that attracts less attention by varying revs less wildly in response to load. This is surprising given that most of the extra performance in this card stems from more transistors and greater power consumption, but that’s what we’re told. Feel free to hold out for our round-up of independent reviews or read past the break for further details.
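NVIDIA’s two relative claims can be chained together: a 70 percent lift over the GTX 580 that is about double what a GTX 680 owner would see puts the 780-over-680 gain near 35 percent, which in turn implies the GTX 680 was roughly a quarter faster than the GTX 580. The snippet below just runs that inference on NVIDIA’s marketing figures; it is not based on benchmarks.

```python
# Chaining NVIDIA's two claims above into an implied GTX 680 vs GTX 580 gap.
gain_780_over_580 = 0.70        # "70 percent lift" pitched at GTX 580 owners
gain_780_over_680 = 0.70 / 2    # "about double the gain a GTX 680 owner would see"

implied_680_over_580 = (1 + gain_780_over_580) / (1 + gain_780_over_680) - 1
print(f"GTX 780 over GTX 680: {gain_780_over_680:.0%}")              # 35%
print(f"Implied GTX 680 over GTX 580: {implied_680_over_580:.0%}")   # ~26%
```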

See more here:
NVIDIA releases GeForce GTX 780 for $649, claims more power with less fan noise

NVIDIA enables full virtualization for graphics: up to eight remote users per GRID GPU

You probably won’t have noticed the following problem unless you happen to be the IT manager in an architecture firm or other specialist environment, but it’s been an issue nonetheless. For all our ability to virtualize compute and graphical workloads, it hasn’t so far been possible to share a single GPU core across multiple users. For example, if you wanted 32 people on virtual machines to access 3D plumbing and electrical drawings in AutoCAD, you’d have needed to dedicate eight expensive quad-GPU K1 graphics cards in your GRID server stack. Now, though, NVIDIA has managed to make virtualization work right the way through to each GPU core for users of Citrix XenDesktop 7, such that you’d only need one K1 to serve that workforce, assuming their tasks were sufficiently lightweight. Does this mean NVIDIA’s K1 sales will suddenly drop by seven eighths? We couldn’t tell ya — but probably not.
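The worked example in the post boils down to a density calculation: a GRID K1 board carries four GPUs, so 32 lightweight users across eight cards is one user per GPU, while the same 32 users on a single K1 is eight users per GPU. The sketch below makes that arithmetic explicit; the 32-user AutoCAD workload is the article’s example, and everything else follows from it.

```python
# Density arithmetic from the example above: each GRID K1 board carries 4 GPUs.
users = 32
gpus_per_k1 = 4

k1_cards_before, k1_cards_after = 8, 1
users_per_gpu_before = users / (k1_cards_before * gpus_per_k1)   # 1 user per GPU
users_per_gpu_after = users / (k1_cards_after * gpus_per_k1)     # 8 users per GPU

print(f"Before: {users_per_gpu_before:g} user per GPU across {k1_cards_before} K1 cards")
print(f"After:  {users_per_gpu_after:g} users per GPU on {k1_cards_after} K1 card")
```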

Read More:
NVIDIA enables full virtualization for graphics: up to eight remote users per GRID GPU