Sci/Tech

Scientific Linux 7.3 Released

Filed under
Red Hat
Sci/Tech
  • Scientific Linux 7.3 Officially Released, Based on Red Hat Enterprise Linux 7.3

    After two Release Candidate (RC) development builds, the final version of the Scientific Linux 7.3 operating system arrived today, January 26, 2017, as announced by developer Pat Riehecky.

    Derived from the freely distributed sources of the commercial Red Hat Enterprise Linux 7.3 operating system, Scientific Linux 7.3 includes many updated components and all the GNU/Linux/Open Source technologies from the upstream release.

    Of course, all of Red Hat Enterprise Linux's specific packages have been removed from Scientific Linux, which now supports Scientific Linux Contexts, allowing users to create local customizations for their computing needs much more efficiently than before.

  • Scientific Linux 7.3 Released

    For users of Scientific Linux, the 7.3 release is now available, based on Red Hat Enterprise Linux 7.3.

The code that took America to the moon was just published to GitHub, and it’s like a 1960s time capsule

Filed under
OSS
Sci/Tech

When programmers at the MIT Instrumentation Laboratory set out to develop the flight software for the Apollo 11 space program in the mid-1960s, the necessary technology did not exist. They had to invent it.

They came up with a new way to store computer programs, called “rope memory,” and created a special version of the assembly programming language. Assembly itself is obscure to many of today’s programmers—it’s very difficult to read, intended to be easily understood by computers, not humans. For the Apollo Guidance Computer (AGC), MIT programmers wrote thousands of lines of that esoteric code.

Read more

Open source machine learning tools as good as humans in detecting cancer cases

Filed under
OSS
Sci/Tech
  • Open source machine learning tools as good as humans in detecting cancer cases

    Machine learning has come of age in public health reporting, according to researchers from the Regenstrief Institute and the Indiana University School of Informatics and Computing at Indiana University-Purdue University Indianapolis. They found that existing algorithms and open source machine learning tools were as good as, or better than, human reviewers at detecting cancer cases in data from free-text pathology reports. The computerized approach was also faster and less resource-intensive than human review.

  • Machine learning can help detect presence of cancer, improve public health reporting

    To support public health reporting, computers and machine learning can improve access to unstructured clinical data, including for cancer case detection, according to a recent study.

FOSS and Artificial Intelligence

Filed under
OSS
Sci/Tech

RoboPhone: Sharp to Sell Real Android Phones in Japan

Filed under
Android
Sci/Tech

The Osaka-based electronics maker said Tuesday it would introduce a new mobile communication device in 2016: a tiny android robot. It will come with the features of a smartphone, including email, Internet connectivity, a camera, and a 2-inch display. Still to be decided is whether the device will use Google Inc.’s Android mobile operating system or another operating system.

Read more

Accelerating Scientific Analysis with the SciDB Open Source Database System

Filed under
OSS
Sci/Tech

Science is swimming in data. And, the already daunting task of managing and analyzing this information will only become more difficult as scientific instruments — especially those capable of delivering more than a petabyte (that’s a quadrillion bytes) of information per day — come online.

Tackling these extreme data challenges will require a system that is easy enough for any scientist to use, that can effectively harness the power of ever-more-powerful supercomputers, and that is unified and extendable. This is where the Department of Energy’s (DOE) National Energy Research Scientific Computing Center’s (NERSC’s) implementation of SciDB comes in.
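
SciDB itself is driven through its own array query interfaces rather than hand-written loops, but the core idea it scales up is chunk-at-a-time processing of arrays too large to fit in memory. Purely as an illustrative sketch of that style, and not SciDB code, the "samples.bin" file name and the chunk size below are arbitrary assumptions; a small C program might stream and summarize a large array like this:

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK 4096  /* doubles processed per pass; the size is an arbitrary choice */

    int main(void)
    {
            /* "samples.bin" is a hypothetical file of raw doubles. */
            FILE *f = fopen("samples.bin", "rb");
            double buf[CHUNK];
            double sum = 0.0, max = 0.0;
            size_t n, total = 0;

            if (!f) {
                    perror("samples.bin");
                    return EXIT_FAILURE;
            }

            /* Stream the data one bounded chunk at a time, so memory use stays
             * constant no matter how large the underlying dataset grows. */
            while ((n = fread(buf, sizeof(double), CHUNK, f)) > 0) {
                    for (size_t i = 0; i < n; i++) {
                            sum += buf[i];
                            if (total + i == 0 || buf[i] > max)
                                    max = buf[i];
                    }
                    total += n;
            }
            fclose(f);

            if (total > 0)
                    printf("count=%zu mean=%g max=%g\n", total, sum / total, max);
            return EXIT_SUCCESS;
    }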

Read more

Scientific Linux 6.7 Officially Released, Based on Red Hat Enterprise Linux 6.7

Filed under
Red Hat
Sci/Tech

The Scientific Linux team, through Pat Riehecky, has had the great pleasure of announcing the release and immediate availability for download of the Scientific Linux 6.7 computer operating system.

Read more

More in Tux Machines

How to Contribute to the Fight Against COVID-19 With Your Linux System

Want to contribute to the research on coronavirus? You don’t necessarily have to be a scientist for this. You can contribute part of your computer’s computing power thanks to the Rosetta@home project. Read more

Raspberry Pi 4 as Desktop Computer: Is It Really Viable?

There’s little doubt that the Raspberry Pi 4 is significantly more powerful than its predecessors. It’s based on the faster ARM Cortex-A72 microarchitecture and has four cores pegged at marginally higher clock speeds. The graphics subsystem is significantly beefed up as well, running at twice the maximum stock clock speed of the outgoing model. Everything about it makes it a viable desktop replacement. But is it really good enough to replace your trusty old desktop? I spent three weeks with the 8GB version of the Pi 4 to answer that million-dollar question. Read more

10 Linux Distributions and Their Targeted Users

As a free and open-source operating system, Linux has spawned several distributions over time, spreading its wings to encompass a large community of users. From desktop/home users to enterprise environments, Linux has ensured that each category has something to be happy about. [...]

Debian is renowned for being a mother to popular Linux distributions such as Deepin, Ubuntu, and Mint, which have provided solid performance, stability, and an unparalleled user experience. The latest stable release is Debian 10.5, an update of Debian 10, colloquially known as Debian Buster. Note that Debian 10.5 does not constitute a new version of Debian Buster; it is only an update of Buster with the latest updates and added software applications, along with security fixes that address pre-existing issues. If you already have a Buster system, there's no need to discard it. Simply perform a system upgrade using the APT package manager.

The Debian project provides over 59,000 software packages and supports a wide range of PCs, with each release encompassing a broader array of system architectures. It strives to strike a balance between cutting-edge technology and stability. Debian provides three main development branches: Stable, Testing, and Unstable.

The Stable branch, as the name suggests, is rock solid and enjoys full security support, but unfortunately does not ship with the very latest software applications. Nevertheless, it is ideal for production servers owing to its stability and reliability, and it also makes the cut for relatively conservative desktop users who don't mind doing without the very latest software packages. Debian Stable is what you would usually install on your system.

Debian Testing is a rolling release and provides the latest software versions that are yet to be accepted into the stable release. It is a development phase of the next stable Debian release, so it is usually fraught with instability issues and might easily break; it also doesn't get its security patches in a timely fashion. The latest Debian Testing release is Bullseye.

The Unstable branch is the active development phase of Debian. It is an experimental distro and acts as a perfect platform for developers who are actively contributing code until it transitions to the Testing stage. Overall, Debian is used by millions of users owing to its package-rich repository and the stability it provides, especially in production environments. Read more

LWN on Linux and Linux Foundation Bits

  • Modernizing the tasklet API

    Tasklets offer a deferred-execution method in the Linux kernel; they have been available since the 2.3 development series. They allow interrupt handlers to schedule further work to be executed as soon as possible after the handler itself. The tasklet API has its shortcomings, but it has stayed in place while other deferred-execution methods, including workqueues, have been introduced. Recently, Kees Cook posted a security-inspired patch set (also including work from Romain Perier) to improve the tasklet API. This change is uncontroversial, but it provoked a discussion that might lead to the removal of the tasklet API in the (not so distant) future.

    The need for tasklets and other deferred-execution mechanisms comes from the way the kernel handles interrupts. An interrupt is (usually) caused by some hardware event; when it happens, the execution of the current task is suspended and the interrupt handler takes the CPU. Before the introduction of threaded interrupts, the interrupt handler had to perform the minimum necessary operations (like accessing the hardware registers to silence the interrupt) and then call an appropriate deferred-work mechanism to take care of just about everything else that needed to be done. Threaded interrupts, yet another import from the realtime preemption work, move the handler to a kernel thread that is scheduled in the usual way; this feature was merged for the 2.6.30 kernel, by which time tasklets were well established.

    An interrupt handler will schedule a tasklet when there is some work to be done at a later time. The kernel then runs the tasklet when possible, typically when the interrupt handler finishes or the task returns to user space. The tasklet callback runs in atomic context, inside a software interrupt, meaning that it cannot sleep or access user-space data, so not all work can be done in a tasklet handler. Also, the kernel only allows one instance of any given tasklet to be running at any given time, though multiple different tasklet callbacks can run in parallel. Those limitations of tasklets are not present in more recent deferred-work mechanisms like workqueues. But still, the current kernel contains more than a hundred users of tasklets.

    Cook's patch set changes the parameter type for the tasklet's callback. In current kernels, callbacks take an unsigned long value that is specified when the tasklet is initialized. This is different from other kernel mechanisms with callbacks; the preferred way in current kernels is to use a pointer to a type-specific structure. The change Cook proposes goes in that direction by passing the tasklet context (struct tasklet_struct) to the callback; a minimal sketch of the old and new styles appears after this list. The goal behind this work is to avoid a number of problems, including the need to cast from the unsigned long to a different type (without proper type checking) in the callback. The change allows the removal of the (now) redundant data field from the tasklet structure. Finally, this change mitigates possible buffer-overflow attacks that could overwrite the callback pointer and the data field. This is likely one of the primary objectives, as the work was first posted (in 2019) on the kernel-hardening mailing list.

  • Android kernel notes from LPC 2020

    Todd Kjos started things off by introducing the Android Generic Kernel Image (GKI) effort, which is aimed at reducing Android's kernel-fragmentation problem in general. It is the next step for the Android Common Kernel, which is based on the mainline long-term support (LTS) releases with a number of patches added on top. These patches vary from Android-specific, out-of-tree features to fixes cherry-picked from mainline releases. The end result is that the Android Common Kernel diverges somewhat from the LTS releases on which it is based.

    From there, things get worse. Vendors pick up this kernel and apply their own changes, often significant core-kernel changes, to create a vendor kernel. The original-equipment manufacturers begin with that kernel when creating a device based on the vendor's chips, but then add changes of their own to create the OEM kernel that is shipped with a device to the consumer. The end result of all this patching is that every device has its own kernel, meaning that there are thousands of different "Android" kernels in use.

    There are a lot of costs to this arrangement, Kjos said. Fragmentation makes it harder to ensure that all devices are running current kernels, or even that they get security updates. New platform releases require a new kernel, which raises the cost of upgrading an existing device to a new Android version. Fixes applied by vendors and OEMs often do not make it back into the mainline, making things worse for everybody.

    The Android developers would like to fix this fragmentation problem; the path toward that goal involves providing a single generic kernel in binary form (the GKI) that all devices would use. Any vendor-specific or device-specific code that is not in the mainline kernel will need to be shipped in the form of kernel modules to be loaded into the GKI; a skeletal example of such a module appears after this list. That means that Android is explicitly encouraging vendor modules, Kjos said; the result is a cleaner kernel without the sorts of core-kernel modifications that ship on many devices now. This policy has already resulted in more vendors actively working to upstream their code. That code often does not take the form that mainline developers would like to see; some of it is just patches exporting symbols. That has created some tension in the development community, he said.

  • Vibrant Networking, Edge Open Source Development On Full Display at Open Networking & Edge Summit

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today marked significant progress in the open networking and edge spaces. In advance of the Open Networking and Edge Summit happening September 28-30, Linux Foundation umbrella projects LF Edge and LF Networking demonstrate recent achievements highlighting trends that set the stage for next-generation technology. [...] “We are thrilled to announce a number of milestones across our networking and edge projects, which will be on virtual display at ONES next week,” said Arpit Joshipura, general manager, Networking, Edge and IOT, at the Linux Foundation. “Indicative of what’s coming next, our communities are laying the groundwork for markets like cloud native, 5G, and edge to explode in terms of open deployments.”
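
As a rough illustration of the tasklet-API change described in the first item above, here is a minimal, hypothetical driver fragment; struct my_dev, its fields, and the handler names are invented for the example, which contrasts the old unsigned long callback with the new struct tasklet_struct * callback set up through tasklet_setup() and recovered with from_tasklet():

    #include <linux/interrupt.h>

    struct my_dev {
            struct tasklet_struct tasklet;   /* deferred-work handle */
            int pending_events;              /* state the callback updates */
    };

    /*
     * Old-style tasklet (still found in many drivers): the callback receives
     * an opaque unsigned long that has to be cast back without type checking.
     *
     *     static void my_dev_do_work(unsigned long data)
     *     {
     *             struct my_dev *dev = (struct my_dev *)data;
     *             dev->pending_events = 0;
     *     }
     *     tasklet_init(&dev->tasklet, my_dev_do_work, (unsigned long)dev);
     */

    /* New-style callback: the tasklet itself is passed in, and from_tasklet()
     * (a container_of() wrapper) recovers the enclosing structure safely, so
     * the separate data field, and the unchecked cast, are no longer needed. */
    static void my_dev_do_work(struct tasklet_struct *t)
    {
            struct my_dev *dev = from_tasklet(dev, t, tasklet);

            dev->pending_events = 0;
    }

    /* Called from the driver's setup/probe path. */
    static void my_dev_setup(struct my_dev *dev)
    {
            tasklet_setup(&dev->tasklet, my_dev_do_work);
    }

    /* An interrupt handler would then defer work with:
     *     tasklet_schedule(&dev->tasklet);
     */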
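
And as a sketch of what the GKI model asks of vendors, here is a skeletal out-of-tree vendor module; the "acme_soc" name and its exported helper are hypothetical, but the structure, a loadable module that exports symbols for other modules rather than patching the core kernel, is the shape the Android developers are encouraging:

    #include <linux/init.h>
    #include <linux/module.h>

    /* Hypothetical vendor-specific helper made available to other modules;
     * exporting symbols like this is what some vendor patches boil down to. */
    int acme_soc_power_on(void)
    {
            pr_info("acme_soc: power sequence started\n");
            return 0;
    }
    EXPORT_SYMBOL_GPL(acme_soc_power_on);

    static int __init acme_soc_init(void)
    {
            pr_info("acme_soc: vendor module loaded alongside the GKI\n");
            return 0;
    }

    static void __exit acme_soc_exit(void)
    {
            pr_info("acme_soc: vendor module unloaded\n");
    }

    module_init(acme_soc_init);
    module_exit(acme_soc_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hypothetical vendor module built against a generic kernel");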