Apple ordered to pay $532.9 million to an iTunes patent troll

Despite attempts to curb patent trolls, at least some of them are getting lucky — and this week, one got very lucky. A Texas court has ordered Apple to pay a whopping $532.9 million to Smartflash, a holding company that sued over claims that iTunes (specifically its copy protection, payment systems and storage) violates its patents. The Cupertino crew allegedly abused the inventions on purpose, in part because one of its execs was briefed on them over a decade ago. Apple is appealing the ruling, and points out that Smartflash hasn’t exactly been above-board in its behavior. The company exists solely to extract patent royalties, and it waited years to take action — it even set up its only office across from the courthouse holding the trial, making it clear that lawsuits were expected from the start. It’s hard to tell whether the appeal will succeed in reducing the payout (Apple wanted to limit damages to $4.5 million) or overturning the verdict. However, Apple isn’t the only target in the crosshairs: Smartflash has also sued Google and Samsung, so it could cause a lot of grief for the tech industry as a whole.


Google Play Music now lets you store 50,000 songs in the cloud

Even if you’re not paying for All Access or YouTube Music Key, Google Play can be a useful way to stream your personal music collection. With its free “locker” service, you can store thousands of tunes online and stream them from the web, as well as your favorite Android and iOS devices. By keeping them in the cloud, they’re quickly accessible across a range of hardware and won’t clog up your precious onboard storage. Until now, Google has set a limit of 20,000 tracks per user, but today it’s raising that amount to 50,000. It’s a significant increase, and one that might appeal if you have a mammoth music library full of EPs, remixes and B-sides that aren’t available from the major streaming services.


Microsoft gives eligible students free Office 365 subscriptions

Turns out Microsoft had a surprise in store for students around the globe this February, and not just for those based in New York. The company’s finally bringing free Office 365 subscriptions to students outside the US, so long as they live in one of the countries (it’s quite a lengthy list) where the product’s available. Schools will have to buy subscriptions for staff and faculty, but once they do, students (and even teachers) can self-install for no charge by using a school-issued email address at the Office in Education website. After signing up, they’ll get access to the newest versions of Word, Excel, PowerPoint, OneNote, Access and Publisher, and be able to install them on up to five computers and five phones or tablets. An account also comes with Office Online and, even better, 1TB of OneDrive storage, so users can go wild uploading anything without quickly running out of space.


AMD’s next laptop processor is mostly about battery life

Intel isn’t the only chip giant championing battery life over performance this year. AMD has revealed Carrizo, a processor range focused heavily on extending the running time of performance-oriented laptops. While there will be double-digit boosts to speed, there’s no doubt that efficiency is the bigger deal here. The new core architecture (Excavator) is just 5 percent faster than its Kaveri ancestor, but it chews up 40 percent less energy at the same clock rate — even the graphics cores use 20 percent less juice. That’s not the only trick up AMD’s sleeve, either. Carrizo is the first processor to meet the completed Heterogeneous System Architecture spec, which lets the CPU and its integrated graphics share memory. That lets some tasks finish faster than they would otherwise (since you don’t need as many instructions), and it could provide a swift kick to both performance and battery life in the right conditions. You’ll also find dedicated H.265 video decoding, so this should be a good match for all the low-bandwidth 4K videos you’ll stream in the future. The new chip is pretty promising as a result. With that said, its creator will undoubtedly be racing against time: Carrizo is expected to reach shipping PCs in the second quarter of the year, close to Intel’s mid-year target for its quad-core Broadwell processors. You may find shiny new AMD and Intel chips in PCs at around the same time — that’s good news if you’re a speed junkie, but it’s not much help to AMD’s bottom line.


What you need to know about HTTP/2

Look at the address bar in your browser. See those letters at the front, “HTTP”? That stands for Hypertext Transfer Protocol, the mechanism a browser uses to request information from a server and display webpages on your screen. A new version of the reliable and ubiquitous HTTP protocol was recently published as a draft by the organization in charge of creating standards for the internet, the Internet Engineering Task Force (IETF). This means that the old version, HTTP/1.1, in use since 1999, will eventually be replaced by a new one, dubbed HTTP/2. This update improves the way browsers and servers communicate, allowing for faster transfer of information while reducing the amount of raw horsepower needed.

Why is this important?

HTTP/1.1 has performed admirably over the years, but it’s starting to show its age. Websites nowadays include many different components besides your standard HTML: design elements (CSS), client-side scripting (JavaScript), images, video and Flash animations. To transfer all of that, the browser has to create several connections, and each one carries details about the source, destination and contents of the communication. That puts a huge load on both the server delivering the content and your browser. All those connections, and the processing power they require, can lead to slowdowns as more and more elements are added to a site. And if we know nothing else, it’s that people can be quite impatient. We’ve come to expect blazing-fast internet, and even the slightest of delays can lead to hair pulling and mumbled swears. For companies, a slow website can translate directly into lost money, especially for online services where long load times mean a bad user experience.

People have been searching for ways to speed up the internet since the days when dial-up and AIM were ubiquitous. One of the more common techniques is caching, where certain information is stored locally rather than transferred anew each time it’s requested. Others have resorted to tricks like lowering the resolution of images and videos; still others have spent countless hours tweaking and optimizing code to cut just milliseconds from their load times. These options are useful, but they’re really just Band-Aids. So Google decided to dramatically overhaul HTTP/1.1 and create SPDY, and the results have been impressive. In general, communication between a server and a browser using SPDY is much faster, even when encryption is applied. At a minimum, transfer speed with SPDY improves by about 10 percent and, in some cases, can reach numbers closer to 40 percent. Such has been the success of SPDY that in 2012 the group of Google engineers behind the project decided to create a new protocol based on the technology, and that started the story that leads us to the current HTTP/2 draft.

What is a protocol?

You can think of a protocol as a collection of rules that govern how information is transferred from one computer to another. Each protocol is a little different, but usually they include a header, a payload and a footer. The header contains the source and destination addresses along with some information about the payload (type of data, size of data and so on). The payload contains the actual information, and the footer holds some form of error detection. Some protocols also support a feature called “encapsulation,” which lets them include other protocols inside their payload section.
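To make that header/payload/footer split concrete, here’s a minimal sketch in Python of a toy frame. Everything about it is our own illustration: the field names, the JSON header and the CRC32 footer are assumptions chosen for clarity, not part of HTTP or any real wire format.

```python
import json
import zlib

def build_frame(source: str, destination: str, payload: bytes) -> bytes:
    """Build a toy frame: header (addresses + payload size), payload, footer (checksum)."""
    header = json.dumps({"src": source, "dst": destination, "size": len(payload)}).encode() + b"\n"
    footer = zlib.crc32(payload).to_bytes(4, "big")  # error detection, like a seal on an envelope
    return header + payload + footer

def parse_frame(frame: bytes) -> bytes:
    """Check the footer checksum and return the payload."""
    header_end = frame.index(b"\n") + 1
    header = json.loads(frame[:header_end])
    payload = frame[header_end:header_end + header["size"]]
    footer = frame[header_end + header["size"]:]
    if zlib.crc32(payload).to_bytes(4, "big") != footer:
        raise ValueError("corrupted frame: checksum mismatch")
    return payload

# Encapsulation: the payload of one protocol can itself be a complete frame.
inner = build_frame("app-a", "app-b", b"hello")
outer = build_frame("host-1", "host-2", inner)
assert parse_frame(parse_frame(outer)) == b"hello"
```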
You can also think of it like sending a letter via snail mail. Our protocol in this case would be defined by the USPS. The letter would require a destination address in a specific format, a return address and postage. The “payload” would be the letter itself, and the error detection is the seal on the envelope: if it arrives ripped and without a letter, you’d know there was a problem.

Why is HTTP/2 better?

In a few words: HTTP/2 loads webpages much faster, saving everyone time that otherwise would go to waste. It’s as simple as that. The example below, published by the folks over at HttpWatch, shows transfer speeds increasing more than 20 percent, and this is just one test with web servers not yet fully optimized (the technology will need some time to mature for that). In fact, improvements of around 30 percent seem to be common.

[Chart: example of HTTP/1.1 page load speed (above) against HTTP/2 (below); source: HttpWatch]

HTTP/2 improves speed mainly by creating one constant connection between the browser and the server, as opposed to opening a new connection every time a piece of information is needed. This significantly reduces the amount of data being transferred. Plus, it transfers data in binary, a computer’s native language, rather than in text, so your computer doesn’t have to waste time translating information into a format it understands. Other features of HTTP/2 include “multiplexing” (sending and receiving multiple messages at the same time), prioritization (more important data is transferred first), compression (squeezing information into smaller chunks) and “server push,” where a server makes an educated guess about what your next request will be and sends that data ahead of time.

So when will we get to enjoy the benefits of HTTP/2?

There’s no real start date for the use of HTTP/2, and many people may already be using it unknowingly. The draft submitted on February 11th will expire in six months (August 15th, to be precise). Before expiring, it has to be confirmed and become a finished document, called an “RFC,” or a new draft with changes has to be published. As a side note, the term “RFC” comes from “Request For Comments,” but it’s really the name for a finalized document used by the IETF. An RFC is also not a requirement, but more of a suggestion of how things should be designed. (Confusing, right?) However, for a protocol to work properly, everyone has to follow the same rules.

The HTTP/2 technology is already baked into many web servers and browsers, even though it’s still just a draft. Microsoft supports HTTP/2 in Internet Explorer under the Windows 10 Technical Preview; Chrome also supports it (it’s disabled by default, but you can easily enable it); and Mozilla has had it available since Firefox Beta 36. As for web servers, IIS (the Windows web server) already supports HTTP/2 under Windows 10, and Apache and Nginx are expected to offer support very soon (both already support SPDY through extensions). This means that, sooner rather than later, we will all be using HTTP/2. Chances are you won’t even notice when the switch is made unless you’re in the habit of timing load times for your favorite sites. You’ll still just see “http” or “https” in the address bar, so life will continue as usual, but a bit faster.
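If you’re curious whether a server you use is speaking HTTP/2 yet, a quick script can tell you. Here’s a minimal sketch using the third-party httpx library (our choice for illustration; the article doesn’t name a tool), which offers HTTP/2 during negotiation and reports which protocol was actually used:

```python
# Requires: pip install "httpx[http2]"   (third-party client library; an
# assumption on our part, since the article doesn't mention one)
import httpx

def check_protocol(url: str) -> None:
    # http2=True makes the client offer HTTP/2 during negotiation; the
    # server decides whether to accept it or fall back to HTTP/1.1.
    with httpx.Client(http2=True) as client:
        response = client.get(url)
        print(f"{url} -> {response.http_version}")

check_protocol("https://www.google.com")  # typically "HTTP/2"
check_protocol("http://example.com")      # plain HTTP, so "HTTP/1.1"
```

The command-line inclined can get the same answer with curl’s --http2 flag, provided their build of curl includes HTTP/2 support.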


Electron microscopes stop thieves from covering their tracks

Ask the police and they’ll tell you that serial numbers seldom help catch thieves — dedicated crooks are usually smart enough to file off those digits so that stolen items can’t be linked to a crime. Researchers at the National Institute of Standards and Technology might have just found a way to recover those numbers and stop criminals in their tracks, however. Their new technique uses electron microscopes to spot damaged crystal patterns in steel, revealing characters even when they’ve been polished into oblivion. Current recovery approaches (like acid etching or electrolytic polishing) only sometimes work, and frequently provide faint clues at best — the microscope produces clear evidence that you could use to convict someone in court. It’s going to be a while before the cops are using this method. Right now, it takes three whole days to identify eight numbers. That time could shrink to an hour through optimization, though. If that happens, gun runners and burglars may have a considerably harder time escaping the long arm of the law. Unless nogoodniks get particularly creative, you’d have little trouble tracing many weapons and fenced items back to their sources.


Chrome warns users of malware-infected websites before connecting to them

Google’s already making sure you don’t download malware, and now it’s expanding its Safe Browsing initiative. In addition to preventative warnings prior to downloading, the Chrome browser will now throw a red flag before you visit a site that may encourage you to install malicious software. Search listings are getting marks for sites that might contain nefarious programs as well, and Mountain View says that it’s actively disabling Google Ads that “lead to sites with unwanted software.” The search giant is urging site owners to install its Webmaster Tools to help keep on top of any possible issues with a site pushing bad software to visitors, and says this will aid with the resolution process should that happen. Again, it’s Google working to live up to its “don’t be evil” reputation and making the internet a safer place for everyone. After all, even the most web-savvy among us have probably downloaded malware at some point.
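For site owners who’d rather script these checks than wait for a Webmaster Tools alert, Google also exposes Safe Browsing as a lookup service. Below is a minimal sketch against the v4 Lookup endpoint, which postdates this article; the endpoint and request shape reflect our understanding of Google’s current API, the key is a placeholder, and the whole thing should be read as illustrative rather than a drop-in tool.

```python
# Sketch of a Safe Browsing Lookup API (v4) query; v4 postdates this article.
# Requires: pip install requests, plus a real API key from Google.
import requests

API_KEY = "YOUR_API_KEY_HERE"  # placeholder
ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def is_flagged(url: str) -> bool:
    """Return True if Safe Browsing lists the URL as malware or unwanted software."""
    body = {
        "client": {"clientId": "example-checker", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    response = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
    response.raise_for_status()
    # An empty JSON object means no match; otherwise "matches" lists the threats.
    return "matches" in response.json()

print(is_flagged("http://example.com"))
```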


Verizon will speed up San Francisco data by installing ‘small cells’

Some carriers use “small cells” to boost their coverage, because these relatively tiny transmitters are both a lot cheaper and less conspicuous than their traditional counterparts. Verizon is one of those carriers — in fact, it’s planning to install 400 small cells in certain high-traffic areas of San Francisco starting in the second quarter. These devices (designed by Ericsson) will be integrated into street lamps and will generally blend into the surroundings within SF’s Financial District, SOMA, Market Street and North Beach neighborhoods. The cells do have a limitation, though: each one can only cover an area with a 250- to 500-foot radius. That’s why, for this rollout, Verizon plans to build a dense network of numerous small cells covering only parts of the city. Verizon’s VP of entertainment and tech policy, Eric Reed, told GigaOm that San Francisco is a great place to prove the technology works: “Verizon’s customers,” he said, “scarf down mobile data there like few other places in the country.” The company expects its LTE network speeds in those locations to be around three times faster once the installation wraps up at the end of 2015. According to Recode, Big Red wants all 400 units up and running before the year ends in preparation for Super Bowl 50 in February 2016, which might bring as many as a million visitors to the city. Not on Verizon? If you’re on AT&T, don’t worry — by then, Ma Bell could also be done installing over 40,000 small cells across the United States to beef up its own coverage.
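Those radius figures invite some quick back-of-envelope math on how much ground 400 small cells can actually cover. A minimal sketch, assuming idealized circular, non-overlapping coverage (real deployments overlap cells and contend with buildings, so these are generous upper bounds):

```python
import math

CELLS = 400
SQ_FT_PER_SQ_MILE = 5280 ** 2  # 27,878,400

# Idealized, non-overlapping circles; treat the results as upper bounds.
for radius_ft in (250, 500):
    area_sq_mi = CELLS * math.pi * radius_ft ** 2 / SQ_FT_PER_SQ_MILE
    print(f"{radius_ft} ft radius -> roughly {area_sq_mi:.1f} square miles")
# 250 ft -> roughly 2.8 sq mi; 500 ft -> roughly 11.3 sq mi. San Francisco
# spans roughly 47 square miles, which helps explain why the rollout targets
# a handful of dense neighborhoods rather than the whole city.
```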


23andMe gets FDA approval, but only to test rare Bloom syndrome

For over a year now, 23andMe has been effectively banned from offering its US customers health-related genetic tests. The company is still selling its personal DNA kits, but the information it can provide is limited to ancestry-related reports and raw genetic data. The US Food and Drug Administration (FDA) was behind the original clampdown in 2013, but this week it’s given the company its blessing for a new test. With the fresh approval, 23andMe can now offer to look for signs of Bloom syndrome, a rare disorder characterized by short stature, sun-sensitive skin and an increased risk of cancer. While this is a single, specific test rather than the broader health reports it offered before, 23andMe calls it an “important first step” toward offering detailed genetic advice in the US once more.


800,000 people get bad tax info in latest Healthcare.gov snafu

Healthcare.gov just can’t catch a break — it’s been targeted by hackers and shared personal information with marketing companies in the past six months, and now it’s trying to clean up a mess for the nearly 800,000 people it just sent incorrect tax information to. The Obama administration confirmed the issue earlier this morning, and officials promised on the Healthcare.gov blog to contact affected households via phone and email over the next few days. Needless to say, don’t file your taxes yet if you signed up for health insurance using the site this past year. Better safe than sorry, right? Alas, the news came too late to save some 50,000 people who had already filed their returns — they’ll be given instructions on how to re-file soon enough. The delay might come as welcome news to people who didn’t want to sit down with a copy of TurboTax for an hour, but it could wind up being a crushing blow to those who really needed that tax refund soon. Officials told The New York Times they weren’t exactly sure how the glitch happened (expect an investigation to follow shortly), but 80 percent of the folks who used Healthcare.gov to sign up for insurance were in the clear as far as the IRS is concerned. Updated tax forms are expected to hit people’s mailboxes in early March, so be sure to keep your eyes peeled.
