This column has written many times about the deep flaws of Digital Rights Management (DRM) - or "Digital Restrictions Management" as Richard Stallman rightly calls it - and the ridiculous laws that have been passed to "protect" it. What these laws effectively do is place copyright above basic rights, reaching beyond the realm of copyright itself into areas like privacy. Yesterday, another example of the folly of using DRM'd products came to light.
Red Hat Product Security tracks a lot of data about every vulnerability affecting every Red Hat product. We make all this data available on our Measurement page and from time to time write various blog posts and reports about interesting metrics or trends.
One metric we’ve not written about since 2009 is the source of the vulnerabilities we fix. We want to answer the question: how did Red Hat Product Security first hear about each vulnerability?
Every vulnerability that affects a Red Hat product is given a master tracking bug in Red Hat Bugzilla. This bug contains a whiteboard field with a comma-separated list of metadata, including the date we found out about the issue and its source. You can get a file containing all this information already gathered for every CVE. A few months ago we updated our ‘daysofrisk’ command line tool to parse the source information, allowing anyone to quickly create reports like this one.
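A whiteboard field like the one described above can be split apart with a few lines of code. The sketch below is illustrative only: the key names (`public`, `source`, `impact`) and the `key=value` layout are assumptions for the example, not the actual Bugzilla schema that ‘daysofrisk’ parses.

```python
def parse_whiteboard(whiteboard: str) -> dict:
    """Split a 'key=value,key=value' whiteboard string into a dict.

    Hypothetical format for illustration; the real field layout may differ.
    """
    fields = {}
    for item in whiteboard.split(","):
        item = item.strip()
        if "=" in item:
            key, _, value = item.partition("=")
            fields[key.strip()] = value.strip()
    return fields

# Made-up whiteboard value, just to show the shape of the data:
meta = parse_whiteboard("public=20140924,source=cert,impact=important")
print(meta["source"])  # -> cert
```

With every CVE's metadata in this form, tallying vulnerabilities by source is a one-line aggregation over the parsed records.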
While OpenBSD generally prides itself on being a secure, open-source operating system, focusing on code correctness and security rather than flashy features, it turns out a potential security bug has been living within OpenBSD for the past decade.
German Phoronix reader "FRIGN" wrote in this afternoon with a subject line entitled, "10 year old critical bug in OpenBSD discovered." He pointed out a post today about a bug discovered in OpenBSD's polling subsystem that could allow DDoS-style attacks on servers: "a critical bug in the polling-subsystem in OpenBSD has been uncovered which allows DDoS-attacks on servers using a non-standard derivation from the POSIX-standard in marking file descriptors non-readable when they should return EOF."
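The POSIX behaviour the report refers to can be demonstrated in a few lines: once the write end of a pipe is closed, `poll()` is expected to mark the read end ready, so a subsequent `read()` returns EOF immediately instead of blocking. A descriptor that is never reported readable in this situation lets a client stall a poll-based server indefinitely, which is the denial-of-service angle. This is a plain-Python illustration of the expected semantics, not the OpenBSD kernel code.

```python
import os
import select

r, w = os.pipe()
os.close(w)                 # the peer "hangs up": no data will ever arrive

p = select.poll()
p.register(r, select.POLLIN)
events = p.poll(1000)       # should return at once, not hit the timeout

fd, flags = events[0]
# POSIX: the read end must be reported ready (POLLIN and/or POLLHUP) ...
assert flags & (select.POLLIN | select.POLLHUP)
# ... and reading it yields EOF (zero bytes) rather than blocking.
assert os.read(r, 1) == b""
os.close(r)
```

A server looping on `poll()` relies on exactly this guarantee to notice dead connections and reclaim their descriptors.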
It hasn't been a good year for open source. Not for its generally golden reputation for software quality and security, anyway. But in a rush to lay blame for the Bash Shellshock vulnerability (and previously for Heartbleed) some, like Roger Grimes, want to dismantle some of the cardinal tenets of open source, like the suggestion that "given enough eyeballs, all bugs are shallow."
Tor, which is capable of all that and more, crucially blocks websites from learning any identifying information about you and circumvents censorship. It also stymies eavesdroppers from discovering what you’re doing on the Web. For those reasons, it would be a powerful addition to the arsenal of privacy tools Firefox already possesses.
The Tor Browser is already a modified version of Firefox, developed over the last decade with close communication between the Tor developers and Mozilla on issues such as security and usability.
Instead, libressl is here because of a tragic comedy of other errors. Let's start with the obvious. Why were heartbeats, a feature only useful for the DTLS protocol over UDP, built into the TLS protocol that runs over TCP? And why was this entirely useless feature enabled by default? Then there's some nonsense with the buffer allocator and freelists and exploit mitigation countermeasures, and we keep on digging and we keep on not liking what we're seeing. Bob's talk has all the gory details.
But why fork? Why not start from scratch? Why not start with some other contender? We did look around a bit, but sadly the state of affairs is that the other contenders aren't so great themselves. Not long before Heartbleed, you may recall Apple dealing with goto fail, aka the worst bug ever, but actually about par for the course.
Proprietary (aka nonfree) software relies on an unjust development model that denies users the basic freedom to control their computers. When software's code is kept hidden, it is vulnerable not only to bugs that go undetected, but to the easier deliberate addition and maintenance of malicious features. Companies can use the obscurity of their code to hide serious problems, and it has been documented that Microsoft provides intelligence agencies with information about security vulnerabilities before fixing them.