This is what a 50-qubit quantum computer looks like

From afar, it looks like a steampunk chandelier, or an intricate collection of tubes and wires that culminate in a small steel cylinder at the bottom. It is, in fact, one of the most sophisticated quantum computers ever built. The processor inside has 50 quantum bits, or qubits, that process tasks in a (potentially) revolutionary way. Normally, information is created and stored as a series of ones and zeroes. Qubits can represent both values at the same time (known as superposition), which means a quantum computer can theoretically explore both possibilities simultaneously. Add more qubits and this hard-to-believe computational power increases.

Last November, IBM unveiled the world’s first 50-qubit quantum computer. It lives in a laboratory, inside a giant white case, with pumps to keep it cool and some traditional computers to manage the tasks or algorithms being run. At CES this year, the company brought the innards — the wires and tubes required to send signals to the chip and keep the system cool — so reporters and attendees could better understand how it works.

The biggest challenge, IBM Research Vice President Jeffrey Welser told me, is isolating the chip from unwanted “noise.” This includes electrical, magnetic and thermal noise — room temperature alone would render the whole machine useless. That’s where the pumps would normally come in. From top to bottom, the system gradually cools from four kelvin — liquid-helium temperatures — to 800 millikelvin, 100 millikelvin and, finally, 10 millikelvin. Inside the canister, that’s 10 thousandths of a degree above absolute zero. The wires, meanwhile, carry RF signals down to the chip. These are then mapped onto the qubits, executing whatever program the research team wishes to run. The wiring is also designed to ensure that no extraneous noise — including heat — is transported to the quantum computer chip at the bottom.
Many in the industry have suggested that a 50-qubit system could achieve “quantum supremacy.” The term refers to the moment when a quantum computer is able to outperform a traditional system or accomplish a task otherwise thought impossible. The problem, though, is that quantum computers are only suited to certain algorithms. They’re well-suited to quantum chemistry, for instance, and material simulations. But it’s unlikely you’ll ever use a quantum computer to complete a PowerPoint presentation. “The world is not classical, it’s quantum, so if you want to simulate it you need a quantum computer,” Welser said.

Researchers have already conducted experiments with quantum computers. Scientists at IBM were able to simulate beryllium hydride (BeH2) on a seven-qubit quantum processor last September, for example. But critics want to see a quantum computer accomplish something more tangible, which would be more meaningful for the everyday consumer. That day, unfortunately, could still be a long way off. “Somewhere between 50 and 100 qubits, we’ll reach the point where we can at least say very clearly, ‘I’ve just simulated a molecule here in a few minutes’ time that would have taken this giant system five days to do.’ That level we’ll be at fairly rapidly. When it gets to something that the public will understand in terms of an application they would use themselves, I can’t really speculate at this point,” Welser said.
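
The superposition idea above can be sketched with a plain state-vector simulation. This is a toy model of what qubits do, not anything resembling IBM’s actual control stack; the hadamard helper below is our own illustration:

```python
import math

# An n-qubit register's state is a vector of 2**n amplitudes.
# A Hadamard gate puts a single qubit into an equal superposition
# of 0 and 1; applying one to every qubit spreads the state over
# all 2**n basis states at once.
def hadamard(amplitudes, qubit, n):
    """Apply a Hadamard gate to `qubit` of an n-qubit state vector."""
    h = 1 / math.sqrt(2)
    out = list(amplitudes)
    for i in range(2 ** n):
        if not (i >> qubit) & 1:       # visit each basis-state pair once
            j = i | (1 << qubit)       # i and j differ only in `qubit`
            a, b = amplitudes[i], amplitudes[j]
            out[i] = h * (a + b)
            out[j] = h * (a - b)
    return out

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                         # start in |000>
for q in range(n):
    state = hadamard(state, q, n)      # superpose every qubit

# All 8 basis states are now equally likely: the register "holds"
# eight values at once, and each extra qubit doubles that count.
probs = [abs(a) ** 2 for a in state]
```

Measuring would collapse this to a single classical outcome, which is why quantum speedups only appear for algorithms structured to exploit the interference between those amplitudes.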

MAME devs are cracking open arcade chips to get around DRM

[Image: A look inside the circuitry of a “decapped” arcade chip. Credit: Caps0ff]

The community behind the Multiple Arcade Machine Emulator (MAME) has gone to great lengths over the years to preserve thousands of arcade games running on hundreds of different chipsets through emulation. That preservation effort has now grown to include physically opening DRM-protected chips in order to view the raw code written inside them — and it’s an effort that could use your crowdsourced help.

While dumping the raw code from many arcade chips is a simple process, plenty of titles have remained undumped and unemulated because of digital-rights-management code that prevents the ROM files from being easily copied off the base integrated circuits. For some of those protected chips, decapping can be used as a DRM workaround: literally removing the chip’s “cap” with nitric acid and acetone. With the underlying circuit paths exposed, there are a few potential ways to get at the raw code. For some chips, a bit of quick soldering to the exposed circuitry can allow for a dumped file that gets around any DRM further down the line. In the case of chips that use a non-rewritable mask ROM, though, the decappers can actually look through a microscope (or at a high-resolution scan) to see the raw zeroes and ones that make up the otherwise protected ROM code.
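
That bit-reading step lends itself to a sketch: once the zeroes and ones have been read off the die, they still need packing back into a ROM image. Real mask ROMs scramble row, column and bit order differently per chip, so the grid_to_bytes helper below assumes the simplest possible layout (row-major, most significant bit first) purely for illustration:

```python
# Bits read optically from a decapped mask ROM arrive as a 2-D grid
# of 0s and 1s. This packs that grid into bytes, assuming row-major
# order with the most significant bit first -- real chips usually
# need per-chip descrambling on top of this.
def grid_to_bytes(grid):
    """Pack a grid of 0/1 ints into a bytes object, 8 bits per byte."""
    flat = [bit for row in grid for bit in row]
    assert len(flat) % 8 == 0, "bit count must be a multiple of 8"
    out = bytearray()
    for i in range(0, len(flat), 8):
        value = 0
        for bit in flat[i:i + 8]:
            value = (value << 1) | bit   # shift in one bit at a time
        out.append(value)
    return bytes(out)

# Two rows of 8 bits -> two bytes of recovered ROM.
dump = grid_to_bytes([
    [0, 1, 0, 0, 1, 0, 0, 0],   # 0x48, ASCII 'H'
    [0, 1, 0, 0, 1, 0, 0, 1],   # 0x49, ASCII 'I'
])
```

The resulting dump can then be checksummed against known-good ROM sets or fed straight to an emulator for verification.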

Intel: Our next chips will be a ‘generation ahead’ of Samsung

Intel says that when its long-delayed 10-nanometer Cannon Lake chips finally arrive, they’ll be a “full generation ahead” of rivals Samsung and TSMC, thanks to “hyper scaling” that squeezes in twice as many transistors. That will yield CPUs with 25 percent more performance and 45 percent lower power use than its current Kaby Lake chips when they ship towards the end of 2017. Furthermore, Intel thinks the tech will keep Moore’s Law going and give it a 30 percent cost advantage over competitors like AMD.

These are bold words, particularly since chief rival Samsung is already producing 10-nanometer chips like the Snapdragon 835, the world’s hottest mobile CPU. However, Intel says that while the chip trace sizes are the same, it has better feature density, letting it squeeze in twice as many transistors as chips from Samsung. That in turn produces smaller die sizes, which “allows Intel to continue the economics of Moore’s Law,” the company explains in a PowerPoint presentation. Down the road, Intel will also release enhanced 10-nanometer tech called 10+ and 10++. To be sure, that’s mostly marketing-speak that will help it keep consumers’ attention until 7-nanometer chips come along. However, the refinements will offer a further 15 percent performance and 30 percent efficiency boost, it says.

Intel laid out all this chip info as part of its Technology and Manufacturing Day yesterday, probably to soothe buyers and investors. Not only did Samsung and Qualcomm beat it to the punch with 10-nanometer chips, AMD also unveiled Ryzen processors that could eat into both its high- and low-end PC markets. However, Intel sounds pretty confident about its next-gen chips and beyond. It’s planning on building 10-nanometer chips for three years before moving on to 7-nanometer tech, about the same cycle length as its current 14-nanometer chips. “We are always looking three generations — seven to nine years — ahead,” says Intel Executive VP Stacy J. Smith.
“Moore’s Law is not ending at any time we can see ahead of us.” Source: Intel (PDF)
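
As a rough sanity check, the article’s figures combine like this; the arithmetic and variable names are ours, not Intel’s:

```python
# Back-of-the-envelope version of Intel's "hyper scaling" claim: at
# the same nominal trace size, doubling transistor density halves the
# die area needed for a fixed transistor budget. The percentages are
# the article's; how we combine them is our own illustration.
density_multiplier = 2.0                     # "twice as many transistors"
relative_die_area = 1 / density_multiplier   # half the silicon per chip

perf_gain = 0.25                  # +25% performance vs. Kaby Lake
power_ratio = 1 - 0.45            # 45% lower power => 55% of the power
perf_per_watt = (1 + perf_gain) / power_ratio   # combined efficiency gain
```

Taken together, those two claims imply roughly 2.3x the performance per watt of Kaby Lake, which is the kind of jump a full node transition has historically delivered.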

NVIDIA brings desktop-class graphics to laptops

With the GeForce GTX 1080, NVIDIA pushed the boundaries of what a $600 graphics card can do. That flagship card was joined by the GTX 1070 and GTX 1060, two lower-power cards based on the same 16nm Pascal architecture at much more affordable prices. Now, it’s bringing out mobile versions of those cards that match their desktop counterparts in almost every area — including being VR-ready.

That’s not hyperbole. The top-of-the-line 1080M has 2,560 CUDA cores and 8GB of 10Gbps GDDR5X memory; the desktop chip has the same. The only difference is clock speed: the mobile part is set at 1,556MHz, while the desktop version runs at 1,607MHz. The two do share the same boost clock (1,733MHz), though, and both have access to all the new technology introduced with the Pascal architecture. That means simultaneous multi-projection, VRWorks, Ansel and the rest.

If you want an idea of what those specs translate to in real-world performance, how’s this: when paired with an i7-6700HQ (a quad-core 2.6GHz chip with a 3.5GHz turbo), the 1080M hits 126FPS in Mirror’s Edge Catalyst, 147 in Overwatch, 145 in Doom, 130 in Metro Last Light and 125 in Rise of the Tomb Raider — all at 1080p with “ultra” settings on a 120Hz display. NVIDIA is really pushing 120Hz gaming, and many of the first crop of Pascal laptops will have 120Hz G-Sync displays.

4K gaming, too, is more than possible. At 4K with “high” settings the same setup can push 89FPS in Overwatch, 70FPS in Doom and 62FPS in Metro Last Light (according to NVIDIA). Only Mirror’s Edge Catalyst and Rise of the Tomb Raider fall short of 60FPS, both clocking in at a very playable 52FPS. At the chip’s UK unveiling, NVIDIA showed the new Gears of War playing in 4K in real time, with absolutely no visible frame drops. With figures like that, it goes without saying that VR will be no problem for the 1080M. The desktop GTX 980 is the benchmark card for both the HTC Vive and Oculus Rift, and the 1080M blows it away.
If you’re looking for more performance, the 1080M supports overclocking, of course — NVIDIA suggests as much as 300MHz — and you can expect laptops sporting two in an SLI configuration soon. The major drawback for the 1080M is power. We don’t know its exact TDP yet, but given that the near-identical desktop version runs at 180W, you’d imagine it’s got to be at least 150W. NVIDIA has tech that counters that heavy power load when you’re not plugged in, of course. Chief among these is BatteryBoost, which lets you set a framerate cap (e.g. 30FPS) and downclocks the GPU appropriately to save power — if your card is capable of pushing 147FPS plugged in, that’s a fair amount of power saved.

Whatever the battery savings, though, it won’t change the fact that the 1080M is only going to slide into big laptops. That’s fine for those already used to carrying behemoths on the go, but plenty of gamers prefer something more portable. Enter the 1070M. NVIDIA says this chip will fit into any chassis that currently handles the 980M, which covers a lot of laptops. Just like the 1080M, the 1070M matches its desktop sibling in many ways. You’ve actually got slightly more in the way of CUDA cores — 2,048 vs. the desktop’s 1,920 — but again they’re clocked slower (1,442MHz vs. 1,506MHz). Memory is the same — 8GB of 8Gbps GDDR5 — and it too benefits from both the Pascal architecture itself and the new software features that come with it.

                  GTX 1080    GTX 1080M   GTX 1070    GTX 1070M
CUDA cores        2,560       2,560       1,920       2,048
Base clock        1,607MHz    1,556MHz    1,506MHz    1,442MHz
Boost clock       1,733MHz    1,733MHz    1,683MHz    1,645MHz
Memory            8GB GDDR5X  8GB GDDR5X  8GB GDDR5   8GB GDDR5
Memory speed      10Gbps      10Gbps      8Gbps       8Gbps
Memory bandwidth  320GB/sec   320GB/sec   256GB/sec   256GB/sec

When faced off against the desktop 1070, the 1070M holds its own. In nearly every test we saw, it got within a couple of percent of the desktop card.
We’re talking 77FPS in The Witcher 3 (1080p, maxed settings, no HairWorks) vs. 79.7FPS on the 1070; 76.2FPS in The Division (1080p ultra) vs. 76.6FPS; and 64.4FPS in Crysis 3 (1080p very high) vs. 66.4FPS. The one outlier was Grand Theft Auto V, which dropped to 65.3FPS vs. 73.7FPS on the desktop 1070. 4K gaming is a stretch on the desktop 1070, and that carries over here, but this card is more than VR-ready. NVIDIA says it’ll support factory overclocking on the 1070M soon, so you may see laptops offering a little more grunt “in a couple of months.”

Rounding off the lineup is the 1060M, the mobile version of NVIDIA’s $249 “budget” VR-ready card. It’s something of an exception to the rule here. Yes, it offers 1,280 CUDA cores and 6GB of 8Gbps GDDR5 memory, equal to the desktop 1060. But at the lower end of the range, the lower clock speed (1,404MHz vs. 1,506MHz) hurts performance quite a bit more. In side-by-side comparisons, NVIDIA’s benchmarks suggest you’ll get within ten percent or so of the desktop card.

That’s not to say the 1060M is a slouch. For traditional gaming, you’re not going to hit 60FPS at 1080p in every game without thinking about settings, but if you can play it on a desktop GTX 980, it’s probably a safe bet that the 1060M can handle it. That’s insanely impressive when you consider that the 1060M will fit into the same chassis as the 970M — think “ultra portable” gaming laptops.

                  GTX 1060M   GTX 1060    GTX 980
CUDA cores        1,280       1,280       2,048
Base clock        1,404MHz    1,506MHz    1,126MHz
Boost clock       1,670MHz    1,708MHz    1,216MHz
Memory            6GB GDDR5*  6GB GDDR5   4GB GDDR5
Memory speed      8Gbps       8Gbps       7Gbps
Memory bandwidth  192GB/sec   192GB/sec   224GB/sec
*Up to

In reality, the ten-percent gap between the 1060 and the 1060M probably makes it slightly slower than the GTX 980, but the difference is almost negligible.
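
To put those 1070-vs-1070M numbers in relative terms, here is the article’s data reduced to percentage deficits (the figures are quoted above; the calculation is ours):

```python
# (desktop 1070 FPS, mobile 1070M FPS) per game, from the text.
benchmarks = {
    "The Witcher 3":      (79.7, 77.0),
    "The Division":       (76.6, 76.2),
    "Crysis 3":           (66.4, 64.4),
    "Grand Theft Auto V": (73.7, 65.3),
}

# Mobile chip's percentage deficit vs. the desktop card, rounded
# to one decimal place.
deficits = {
    game: round((desktop - mobile) / desktop * 100, 1)
    for game, (desktop, mobile) in benchmarks.items()
}
```

Three of the four games land within about 3.5 percent of the desktop card; only GTA V opens up a gap of roughly 11 percent.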
I wasn’t able to push the 1060M too hard on the “VR ready” promise — you can read about the demo and why the 1060M matters in a separate article — but the demo I had was solid. And really, being able to plug an Oculus into something as slim as a Razer Blade was unthinkable a few months ago, so it’s probably best not to complain. Acer, Alienware, Asus, Clevo, EVGA, HP, Gigabyte, Lenovo, MSI, Origin, Razer, Sager and XMG are just some of the OEMs signed up to make laptops with the new Pascal chips. Many will announce updated and all-new models today, while some might hold off a while. But expect lots of super-powerful, VR-ready gaming laptops very soon.

Intel 2016-2017 Layoffs: 12,000 Cuts, 11% Positions Will Be Eliminated

Intel’s job cuts and layoffs arrive as the chip giant sees deeper PC market weakness and a need to shift faster to cloud data centers and the Internet of Things. This post appeared first on ChannelE2E.

Intel gets go-ahead for $4 billion chip plant in Ireland, will produce its next-gen 14nm processors

Intel has been planning to make its Ireland base one of three global manufacturing sites for its 14nm chips since May last year, and it’s now been given the okay by Ireland’s lead planning agency. The new $4 billion plant will create around 4,300 jobs in the region in Co. Kildare, where Intel already has around 4,000 on staff. The two-year plan involves redeveloping its existing operation, expanding and shifting to its smaller, more efficient 14nm process. Intel’s plans don’t stop there, however: it still plans to roll out 10nm products sometime in 2015. Via: Silicon Republic. Source: Pleanala.

NVIDIA officially unveils Tegra 4: offers quad-core Cortex A15, 72 GPU cores, LTE support

One new SoC per year? That’s what NVIDIA pledged back in the fall of 2010, and today at its CES 2013 presser it delivered with the Tegra 4’s official unveiling. The chip, which retains the same 4-plus-1 core arrangement as its predecessor, arrives with a whopping 72 GeForce GPU cores — effectively offering six times the Tegra 3’s visual output — and is built on the 28nm process. It is also the first quad-core processor with Cortex-A15 cores on board, and offers compatibility with LTE networks through an optional chip. NVIDIA claims this piece of silicon is the world’s fastest mobile processor, and showed a demonstration in which a Tegra 4 went head-to-head against a Nexus 10 in loading websites (you can guess which one won).

The Tegra 4 also introduces a new computational photography architecture, adding an engine that drives image processing up to 10 times faster than current platforms. To show off its power, NVIDIA demonstrated HDR rendering on live video. The chip is also capable of applying HDR to burst shots and LED-flash photos. The idea, NVIDIA says, is to eventually make our mobile cameras more powerful than DSLRs, and this is certainly a step in the right direction. Joseph Volpe contributed to this report.
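
The HDR merging NVIDIA demonstrated can be sketched in miniature: combine differently exposed frames, trusting well-exposed pixels most. The toy exposure-fusion pass below, on grayscale values, is our own illustration and not NVIDIA’s actual pipeline:

```python
# Merge an under- and an over-exposed frame of grayscale pixels
# (0-255), weighting each pixel by how well-exposed it is. Real HDR
# pipelines work on raw sensor data across many frames; this shows
# only the core weighting idea.
def exposure_weight(p):
    """Triangle weight: peaks at mid-gray, zero at pure black/white."""
    return 1 - abs(p - 127.5) / 127.5

def fuse(dark_frame, bright_frame):
    fused = []
    for d, b in zip(dark_frame, bright_frame):
        wd, wb = exposure_weight(d), exposure_weight(b)
        if wd + wb == 0:                  # both clipped: fall back to mean
            fused.append((d + b) / 2)
        else:                             # weighted blend of the two frames
            fused.append((wd * d + wb * b) / (wd + wb))
    return fused

# A blown-out highlight (255) in the bright frame defers to the dark
# frame's detail (180); a crushed shadow (0) defers to the bright frame.
result = fuse([180, 0], [255, 90])
```

Doing this per pixel on live video is exactly the kind of embarrassingly parallel workload that maps well onto a many-core mobile GPU.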

NVIDIA Tegra 4 processor details leaked: 4-plus-1 cores, 28nm, six times the power of Tegra 3

NVIDIA’s next superhero-themed mobile chipset has possibly made an early appearance in a leaked slide in China, and it looks like it wants to go toe-to-toe with the latest processors from Samsung and Qualcomm. The Tegra 4 (codenamed Wayne) will apparently use the same power-efficient 28nm process found in its Snapdragon rival, and according to the slide from Chip Hell, there’s a dizzying 72-core graphics setup. That’s apparently 20 times the power found in the Tegra 2 and six times that of the last-generation Tegra 3, which powered, among other devices, the Nexus 7. Those graphics cores will be able to drive screens up to 2,560 x 1,600, with 1080p output at 120Hz, while the leak also mentions 4K — if only in passing. We won’t see any increase in CPU cores this time, with the same 4-plus-1 setup, but we are likely seeing a move to ARM’s latest design, the Cortex-A15. It’ll also catch up with USB 3.0, being NVIDIA’s first mobile chip to do so, alongside dual-channel DDR3L memory. We’ve reached out to the chipmaker and we’ll let you know when we hear more, but it’s highly likely we’ll be seeing this next-generation processor early next year — say, at a mobile trade show. Via: Mobile Geeks. Source: Chip Hell.

Leaked chart appears to spill beans on Intel’s Haswell desktop CPU range

The folks over at VR-Zone have snagged a chart that purports to represent Intel’s plans for the Haswell architecture in 2013. If it’s genuine, we can expect at least 14 new desktop CPUs to arrive next year, including a range-topping 3.5GHz Core i7 with 400MHz of headroom in boost mode and a TDP of just 84W — i.e. midway between Sandy and Ivy Bridge in terms of power consumption, but not bad when you consider this will be a higher-performance architecture with no transistor shrinkage. Integrated graphics have also apparently been tweaked, with a reference to HD 4600. Since we can’t expect Intel to confirm the leak, we’ll just have to file this one in the “plausible” cabinet. (What, you didn’t know we had filing cabinets?) Via: Ubergizmo. Source: VR-Zone (translated).
