
Openwashing and Linux Foundation Openwash

Filed under
OSS
  • Huobi’s ‘Regulator-Friendly’ Blockchain Goes Open Source

    Huobi Chain, the regulator-facing public blockchain of exchange Huobi Group, is now open source and publicly available to all developers on GitHub, the firm said Tuesday.

    Nervos, a blockchain development startup, is providing part of the technical infrastructure for the project.

    The firms are developing pluggable components for the network that could enable regulators to supervise contract deployments, asset holdings and transfers, as well as the enforcement of anti-money-laundering regulations, Bo Wang, a Nervos researcher, told CoinDesk.

    The components will also allow financial institutions, such as banks and regulatory agencies, to freeze assets and accounts in case of emergencies via sidechains, according to Wang.

  • Is Open Source Broken?

    The movement to develop software applications and all manner of IT services through the open source model is fundamentally rooted in the notion of community contribution, but things have shifted.

  • Managing all your enterprise's APIs with new management gateways for review
  • See you at KubeCon!

    It’s that time of year again! We’re getting ready to head on out to San Diego for KubeCon + CloudNativeCon NA. For me, KubeCon always makes for an exciting and jam-packed week. 

  • Amazon Web Services, Genesys, Salesforce Form New Open Data Model

    To accelerate digital transformation, organizations in every industry are modernizing their on-premises technologies by adopting cloud-native applications. According to the International Data Corporation (IDC), global spend on cloud computing will grow from $147 billion in 2019 to $418 billion by 2024. Almost half of that investment will be tied to technologies that help companies deliver personalized customer experiences.

    One major challenge of this shift to cloud computing is that applications are typically created with their own data models, forcing developers to build, test, and manage custom code that’s necessary to map and translate data across different systems. The process is inefficient, delays innovation, and ultimately can result in a broken customer experience.

  • The Linux Kernel Mentorship program was a life changing experience

    Operating systems, computer architectures and compilers have always fascinated me. I like to go in depth to understand the important software components we depend on! My life changed when engineers from IBM LTC (Linux Technology Center) came to my college to teach us the Linux Kernel internals. When I heard about the Linux Kernel Mentorship program, I immediately knew that I wanted to be a part of it to further fuel my passion for Linux.

    One of the projects available to work on during the Linux Kernel Mentorship program was “Predictive Memory Reclamation”. I really wanted the opportunity to work on the core kernel, and I began working with my mentor Khalid Aziz during the application period, when he gave me a task on identifying the anonymous memory regions of a process. I learned a lot in the application period by reading various blogs, textbooks and commit logs.

    During my mentorship period, I worked to develop a predictive memory reclamation algorithm in the Linux kernel. The aim of the project was to reduce the amount of time the kernel spends reclaiming memory to satisfy processes' requests for memory under memory pressure, i.e. when there is not enough free memory to satisfy a process's allocation. We implemented a predictive algorithm that can forecast memory pressure and proactively reclaim memory to ensure there is enough available for processes.
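The idea described above — forecast memory pressure from recent samples and reclaim before allocations stall — can be sketched in a few lines. This is an illustrative model only, not the kernel implementation: the sampling window, watermark, and linear-trend forecast are invented for the example.

```python
# Illustrative sketch of predictive reclaim (not the actual kernel code):
# fit a linear trend to recent free-memory samples, extrapolate a few
# steps ahead, and trigger proactive reclaim if the forecast dips below
# a watermark. Window size, horizon, and watermark are assumptions.

def forecast_free_pages(samples, horizon):
    """Least-squares linear extrapolation `horizon` steps past the last sample."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples))
    den = sum((i - mean_x) ** 2 for i in range(n))
    slope = num / den if den else 0.0
    return samples[-1] + slope * horizon

def should_reclaim(samples, watermark, horizon=5):
    """True when the forecast says free memory will fall below the watermark."""
    return forecast_free_pages(samples, horizon) < watermark

free = [10000, 9000, 8000, 7000, 6000]       # free pages falling steadily
print(should_reclaim(free, watermark=2000))  # True: trend hits the watermark soon
```

A steadily falling series triggers reclaim early, while a flat series does not — which is the point of predicting pressure instead of reacting to it.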

AWS and Salesforce

  • AWS, Salesforce join forces with Linux Foundation on Cloud Information Model

    Last year, Adobe, SAP and Microsoft came together and formed the Open Data Initiative. Not to be outdone, this week, AWS, Salesforce and Genesys, in partnership with The Linux Foundation, announced the Cloud Information Model.

    The two competing data models have a lot in common. They are both about bringing together data and applying a common open model to it. The idea is to allow for data interoperability across products in the partnership without a lot of heavy lifting, a common problem for users of these big companies’ software.

    Jim Zemlin, executive director at The Linux Foundation, says this project provides a neutral home for the Cloud Information model, where a community can work on the problem. “This allows for anyone across the community to collaborate and provide contributions under a central governance model. It paves the way for full community-wide engagement in data interoperability efforts and standards development, while rapidly increasing adoption rate of the community,” Zemlin explained in a statement.
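The mapping problem a common model addresses can be shown in miniature. The field names and mapping below are invented for illustration — they are not taken from the Cloud Information Model or any vendor schema.

```python
# Hypothetical sketch of schema translation: each vendor names the same
# customer fields differently, so without a shared model, integrators
# write mapping code for every pair of systems. A common model means
# each system only needs one mapping, to the shared schema.

def to_common_model(record: dict, field_map: dict) -> dict:
    """Translate a vendor-specific record into the shared model's field names."""
    return {common: record[vendor]
            for vendor, common in field_map.items()
            if vendor in record}

crm_record = {"FirstName": "Ada", "EmailAddr": "ada@example.com"}
crm_map = {"FirstName": "given_name", "EmailAddr": "email"}

print(to_common_model(crm_record, crm_map))
```

With N systems, pairwise translation needs on the order of N² mappings; mapping each system to one common model needs only N.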


More in Tux Machines

today's howtos

Events: KVM Forum 2019 and "Bar Charts for Diversity"

  • A recap of KVM Forum 2019

    The 13th KVM Forum virtualization conference took place in Lyon, France in October 2019. One might think that development had finished on the Kernel-based Virtual Machine (KVM) module that was merged in Linux 2.6.20 in 2007, but this year's conference underscored the amount of work still being done, particularly on side-channel attack mitigation, I/O device assignment with VFIO and mdev, footprint reduction with micro virtual machines (VMs), and the ability to run VMs nested within VMs. Many talks also involved the virtual machine monitor (VMM) user-space programs that use the KVM kernel module—of which QEMU is the most widely used.

  • Enhancing KVM for guest protection and security

    A key tenet of KVM is to reuse as much Linux infrastructure as possible and focus specifically on processor virtualization. Back in 2007, this meant a smaller code base and less friction with the other kernel subsystems, especially when compared with other virtualization technologies such as Xen. This led to KVM being merged into the mainline with relative ease. But now, in the era of microarchitectural vulnerabilities, the priorities have shifted, and KVM's reliance on other kernel subsystems can be a liability. For one thing, the host kernel widens the TCB (Trusted Computing Base) and makes for a larger attack surface. In addition, kernel data structures such as the direct memory map give Linux access to guest memory even when it is not strictly necessary and make it impossible to fully enforce the principle of least privilege. In his talk "Enhancing KVM for Guest Protection and Security" (slides [PDF]) presented at KVM Forum 2019, long-time KVM contributor Jun Nakajima explained this risk and suggested some strategies to mitigate it.

  • Bar charts for diversity

    At the Linux App Summit I gave an unconference talk titled Hey guys, this conference is for everyone. The “hey guys” part refers to excluding people from a talk or making them feel uncomfortable – you can do this unintentionally, and the take-away of the talk was that you (yes, you) can be better. I illustrated this mostly with conversational distance, a favorite topic of mine that I can demonstrate easily on stage: there's a lot of diversity in how far people stand from strangers while explaining something they care about. The talk wasn't recorded, but I've put the slides up.

    Another side of diversity can be dealt with by statistics. Since I'm a mathematician, I have a big jar of peanuts and raisins in the kitchen. Late at night I head down to the kitchen and grab ten items from the jar. Darn, all of them are raisins. What are the odds!? Well, a lot depends on whether there are any peanuts in the jar at all, what percentage is peanuts, and whether I'm actually picking things randomly or not. There's a convenient tool that Katarina Behrens pointed me to, which can help figure this out. Even if there's only a tiny fraction of peanuts in the jar, there's an appreciable chance of getting one (e.g. change the percentage on that page to 5% and you'll see).
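The "what are the odds" question above has a quick back-of-the-envelope answer. Using a with-replacement (binomial) approximation of the jar — an assumption that is close enough when the jar is large — the chance that ten random grabs contain at least one peanut is 1 − (1 − p)¹⁰ for peanut fraction p:

```python
# Binomial approximation of the peanut/raisin thought experiment:
# if a fraction p of the jar is peanuts, the probability that `draws`
# random grabs include at least one peanut is 1 - (1 - p) ** draws.

def p_at_least_one(p, draws=10):
    """Chance of drawing at least one peanut in `draws` grabs."""
    return 1 - (1 - p) ** draws

# Even a 5% peanut fraction shows up in ten grabs about 40% of the time,
# so pulling ten straight raisins from a 5%-peanut jar is notable.
print(round(p_at_least_one(0.05), 3))
```

This matches the point in the text: a tiny fraction of peanuts still gives an appreciable chance of grabbing one, so an all-raisin handful is evidence worth examining.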

Linux on the MAG1 8.9 inch mini-laptop (Ubuntu and Fedora)

The Magic Ben MAG1 mini-laptop is a 1.5 pound notebook computer that measures about 8.2″ x 5.8″ x 0.7″ and features an 8.9 inch touchscreen display and an Intel Core m3-8100Y processor. As I noted in my MAG1 review, the little computer also has one of the best keyboards I've used on a laptop this small and a tiny but responsive trackpad below the backlit keyboard. Available from GeekBuying for $630 and up, the MAG1 ships with Windows 10, but it's also one of the most Linux-friendly mini-laptops I've tested to date. [...]

I did not install either operating system to local storage, so I cannot comment on sleep, battery life, fingerprint authentication, or other features that you'd only be able to truly test by fully installing Ubuntu, Fedora, or another GNU/Linux-based operating system. But running from a live USB is a good way to kick the tires and see if there are any obvious pain points before installing an operating system, and for the most part the two operating systems I tested look good to go.

Booting from a flash drive is also pretty easy. Once you've prepared a bootable drive using Rufus, UNetbootin, or a similar tool, just plug it into the computer's USB port and hit the Esc key during startup to bring up the UEFI/setup utility.

Read more

Also: Top 10 technical skills that will get you hired in 2020

Android Leftovers