
Interview with Donald Knuth

Filed under: Interviews, OSS

Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren’t widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code—both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?

Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn’t surprised me during the past several decades. But it still hasn’t reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.

For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.
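To make the point concrete, here is a minimal C sketch (not from the interview; the CONFIG_SMP flag and the SYNC macro are invented for illustration) of how a source-level build option can drop synchronization instructions that a one-size-fits-all binary would have to carry:

    /* Hypothetical sketch: a memory-barrier macro that a source build can
     * configure away.  On a single-processor installation the fence is
     * unnecessary, so a source-configured build compiles it to nothing,
     * while a generic binary would have to ship the fence for everyone. */
    #include <stdio.h>

    #ifdef CONFIG_SMP
    #  define SYNC()  __sync_synchronize()   /* full memory barrier (GCC/Clang builtin) */
    #else
    #  define SYNC()  ((void)0)              /* uniprocessor build: no sync instruction emitted */
    #endif

    static int shared_counter;

    int main(void)
    {
        shared_counter++;
        SYNC();   /* present only when compiled with -DCONFIG_SMP */
        printf("%d\n", shared_counter);
        return 0;
    }

Built with -DCONFIG_SMP the barrier is emitted; built without it, the resulting binary contains no fence at all, which is the kind of per-installation tuning Knuth has in mind.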

Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp—for some reason, I really don’t know why! I’m quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers.

Remember, though, that my opinion on economic questions is highly suspect, since I’m just an educator and scientist. I understand almost nothing about the marketplace.

More Here




Good link

Thanks for that. Knuth is one of my inspirations.

