Robots (or spiders, or crawlers) are little computer programs that search engines use to scan and index websites. Robots.txt is a little file placed on web servers to tell search engines what they should and shouldn't index. The Internet Archive isn't a search engine, but it has historically obeyed exclusion requests in robots.txt files. Now it's changing its mind, because robots.txt is almost always crafted with search engines in mind and rarely reflects the intentions of domain owners when it comes to archiving. From the Archive's announcement:

Over time we have observed that the robots.txt files that are geared toward search engine crawlers do not necessarily serve our archival purposes. Internet Archive's goal is to create complete "snapshots" of web pages, including the duplicate content and the large versions of files. We have also seen an upsurge of the use of robots.txt files to remove entire domains from search engines when they transition from a live web site into a parked domain, which has historically also removed the entire domain from view in the Wayback Machine. In other words, a site goes out of business and then the parked domain is "blocked" from search engines and no one can look at the history of that site in the Wayback Machine anymore. We receive inquiries and complaints on these "disappeared" sites almost daily. A few months ago we stopped referring to robots.txt files on U.S. government and military web sites for both crawling and displaying web pages (though we respond to removal requests sent to info@archive.org). As we have moved towards broader access it has not caused problems, which we take as a good sign. We are now looking to do this more broadly.

An excellent decision. To be clear, they're ignoring robots.txt even if you explicitly identify and disallow the Internet Archive. It's a splendid reminder that nothing published on the web is ever meaningfully private; everything goes on your permanent record.
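For the record, explicitly disallowing the Internet Archive has traditionally meant adding a stanza like the following to your robots.txt. This is a minimal sketch; it assumes "ia_archiver", the user-agent token the Archive's crawler has historically honored, is the right name to target:

    User-agent: ia_archiver
    Disallow: /

Those two lines bar the Archive's crawler from the entire site while leaving every other crawler untouched, and they are exactly the kind of directive the Wayback Machine now plans to ignore.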
Read the original post:
Internet Archive to ignore robots.txt directives
Today, Lilium Aviation conducted the first test flight of its all-electric, two-seater, vertical take-off and landing (VTOL) prototype. "In a video provided by the Munich-based startup, the aircraft can be seen taking off vertically like a helicopter, and then accelerating into forward flight using wing-borne lift," reports The Verge. From the report:

The craft is powered by 36 separate jet engines mounted on its 10-meter-long wings via 12 movable flaps. At take-off, the flaps are pointed downwards to provide vertical lift. And once airborne, the flaps gradually tilt into a horizontal position, providing forward thrust. During the tests, the jet was piloted remotely, but its operators say their first manned flight is close at hand. And Lilium claims that its electric battery "consumes around 90 percent less energy than drone-style aircraft," enabling the aircraft to achieve a range of 300 kilometers (186 miles) with a maximum cruising speed of 300 kph (186 mph). "It's the same battery that you can find in any Tesla," Lilium co-founder Patrick Nathen told The Verge. "The concept is that we are lifting with our wings as soon as we progress into the air with velocity, which makes our airplane very efficient. Compared to other flights, we have extremely low power consumption." The plan is to eventually build a 5-passenger version of the jet.

Read more of this story at Slashdot.