Planet Debian - https://planet.debian.org/

Bits from Debian: DebConf19 closes in Curitiba and DebConf20 dates announced

Saturday 27th of July 2019 09:40:00 PM

Today, Saturday 27 July 2019, the annual Debian Developers and Contributors Conference came to a close. Hosting more than 380 attendees from 50 different countries over a combined 145 event talks, discussion sessions, Birds of a Feather (BoF) gatherings, workshops, and activities, DebConf19 was a large success.

The conference was preceded by the annual DebCamp, held 14 July to 19 July, which focused on individual work and team sprints for in-person collaboration toward developing Debian, and which hosted a 3-day packaging workshop where new contributors were able to get started on Debian packaging.

The Open Day held on July 20, with over 250 attendees, enjoyed presentations and workshops of interest to the wider audience, a Job Fair with booths from several of the DebConf19 sponsors and a Debian install fest.

The actual Debian Developers Conference started on Sunday 21 July 2019. Together with plenaries such as the traditional 'Bits from the DPL', lightning talks, live demos and the announcement of next year's DebConf (DebConf20 in Haifa, Israel), there were several sessions related to the recent release of Debian 10 buster and some of its new features, as well as news updates on several projects and internal Debian teams, discussion sessions (BoFs) from the language, ports, infrastructure, and community teams, along with many other events of interest regarding Debian and free software.

The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the entire conference.

For those who were not able to attend, most of the talks and sessions were recorded and live-streamed, with videos made available through the Debian meetings archive website. Almost all of the sessions facilitated remote participation via IRC messaging apps or online collaborative text documents.

The DebConf19 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf20 will be held in Haifa, Israel, from 23 August to 29 August 2020. As tradition follows, before the next DebConf the local organizers in Israel will start the conference activities with DebCamp (16 August to 22 August), with a particular focus on individual and team work toward improving the distribution.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, the Welcome team and the Anti-Harassment team) are available to help both on-site and remote participants get the best experience of the conference, and to find solutions to any issues that may arise. See the page about the Code of Conduct on the DebConf19 website for more details.

Debian thanks the numerous sponsors for their commitment to supporting DebConf19, particularly our Platinum Sponsors: Infomaniak, Google and Lenovo.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

Contact Information

For further information, please visit the DebConf19 web page at https://debconf19.debconf.org/ or send mail to press@debian.org.

Jonathan Wiltshire: Haifa

Saturday 27th of July 2019 08:53:12 PM

Ben Hutchings: Debian LTS work, July 2019

Saturday 27th of July 2019 01:40:05 PM

I was assigned 18.5 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I prepared and released Linux 3.16.70 with various fixes from upstream. I then rebased jessie's linux package on this. Later in the month, I picked the fix for CVE-2019-13272, uploaded the package, and issued DLA-1862-1. I also released Linux 3.16.71 with just that fix.

I backported the latest security update for Linux 4.9 from stretch to jessie and issued DLA-1863-1.

Ben Hutchings: Talk: What's new in the Linux kernel (and what's missing in Debian)

Saturday 27th of July 2019 01:24:34 PM

As planned, I presented my annual talk about Linux kernel changes at DebConf on Monday—remotely. (I think this was a DebConf first.)

A video recording is already available (high quality, low quality). The slides are linked from my talks page and from the DebConf event page.

Thanks again to the video team for taking the time to work out video and audio routing with me.

Laura Arjona Reina: A new home for Debian in the Mastodon / ActivityPub fediverse: follow @debian@framapiaf.org (and possible future moves)

Saturday 27th of July 2019 12:13:02 PM
TL;DR

Recent events in the fediverse in general and related to fosstodon.org instance in particular have made me rethink the place where I’d like to handle the @debian account in the Mastodon/GNU Social/ActivityPub fediverse.
I couldn’t decide on a “final” place yet, but I’m exploring options (including self-hosting).

For now, I’ve moved the account to @debian@framapiaf.org – Please follow @debian there. Thank you Framasoft for administering and providing the service.

(Some) context

Note: This paragraph was updated (2019-07-28); thanks to the people who pointed out to me that it was unclear. I hope this new wording and the added details clarify my position.

For a summary of what happened, plus some thoughts thrown onto the table, you can read this article by Brandon ‘LinuxLiaison’ Nolet and this one by ’emsenn’. I’ve been thinking about all this, and I decided to leave the fosstodon.org instance because I believe there are underlying issues that the provided apology does not solve, and which do not help to foster the welcoming, diverse and inclusive environment where I’d like to be, for me, and for this non-official debian account. There is more info out there and several different personal opinions, so I guess people interested in learning more about the context can find it by themselves.

Roadmap
  • Starting 2019-07-28 I’ll post the micronews.debian.org RSS feed in @debian@framapiaf.org
  • I will continue posting the micronews.debian.org RSS feed to @debian@fosstodon.org too, to give time for this news to spread and people to move.
  • I will pin a toot linking to this blog post in both accounts, because @debian@framapiaf.org may be temporary (or not; we’ll see).
  • On 1 September I will stop sending the micronews feed to @debian@fosstodon.org  and I will only post a toot to this blog post from time to time.
  • On 1 October I will stop posting anything from @debian@fosstodon.org and close the account or make it dormant or whatever.
  • I don’t think I will make a new decision about a final or future move before October. I will try to put time into exploring options from September until the end of the year. Depending on my availability and the available help from Debian friends, the final home of the @debian account in the fediverse will be settled sooner or later… you know, “when it’s ready”.
Thanks for understanding, and for your help

All this caught me at a “bad moment” (very busy with Debian and non-Debian stuff, plus, personally, lower energy than usual). I apologise for not giving many details and also for not reacting more quickly.

I would appreciate it if you could spread this news so people can follow the new account easily.
I would like to thank the friends who gave me a heads-up about what was happening, who helped me to understand at a time when I did not have much time to read everything, and who were patient in waiting for me to make a decision.

Reminder: the account, wherever it’s hosted, is a mirror of micronews.debian.org

Finally, I would like to remind everybody that the @debian account in the fediverse, wherever it is hosted, is not official. It just posts the RSS feed provided by https://micronews.debian.org, which is one of the official sources of news about Debian. Micronews includes short news items produced or selected by the Debian Publicity team and also broadcasts links to the longer official announcements posted in the other official channels: the Debian blog, the Debian website, and the Debian announce and news mailing lists.

Enrico Zini: Opinion Sort

Saturday 27th of July 2019 09:29:21 AM

«Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.

This discrepancy is common in public life, where people are frequently impelled— whether by their own propensities or by the demands of others—to speak extensively about matters of which they are to some degree ignorant.

Closely related instances arise from the widespread conviction that it is the responsibility of a citizen in a democracy to have opinions about everything, or at least everything that pertains to the conduct of his country’s affairs.

The lack of any significant connection between a person’s opinions and his apprehension of reality will be even more severe, needless to say, for someone who believes it his responsibility, as a conscientious moral agent, to evaluate events and conditions in all parts of the world.»

(From Harry G. Frankfurt's On Bullshit)

Opinion Sort

In a world where it is more important to have a quick opinion than a thorough understanding, I propose this novel sorting algorithm.

def opinion_sort(list: List[Any], post: Callable[List]):
    """
    list: a list of elements to sort in place
    post: a callable that requires a sorted list as input and does
          proper error checking, as they should do
    """
    if list[0] > list[1]:
        swap(list[0], list[1])
    while True:
        try:
            # Assert opinion: "It is a sorted list!"
            post(list)
        except NotSortedException as e:
            # Someone disagrees, and they have a good point
            swap(list[e.unsorted_idx_1], list[e.unsorted_idx_2])
        else:
            break
    # The list is now sorted, and the callable has to agree

This algorithm is the most efficient sorting algorithm, because it can sort a list by only looking at the first two elements.
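For the curious, the idea does actually terminate: every disputed swap removes one inversion. A runnable sketch of it, where the `pedantic_post` checker and the explicit tuple swap are my own stand-ins for the `post` callable and `swap` of the pseudocode above:

```python
from typing import Any, Callable, List


class NotSortedException(Exception):
    """Raised by a checker that disagrees with the opinion 'it is sorted'."""
    def __init__(self, i: int, j: int):
        super().__init__(f"elements {i} and {j} are out of order")
        self.unsorted_idx_1 = i
        self.unsorted_idx_2 = j


def pedantic_post(lst: List[Any]) -> None:
    """A callable that requires a sorted list and does proper error checking."""
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            raise NotSortedException(i, i + 1)


def opinion_sort(lst: List[Any], post: Callable[[List[Any]], None]) -> List[Any]:
    # Form a quick opinion by looking only at the first two elements.
    if lst[0] > lst[1]:
        lst[0], lst[1] = lst[1], lst[0]
    while True:
        try:
            # Assert opinion: "It is a sorted list!"
            post(lst)
        except NotSortedException as e:
            # Someone disagrees, and they have a good point.
            i, j = e.unsorted_idx_1, e.unsorted_idx_2
            lst[i], lst[j] = lst[j], lst[i]
        else:
            break  # The list is now sorted, and the callable has to agree.
    return lst


print(opinion_sort([3, 1, 4, 1, 5, 9, 2, 6], pedantic_post))
# → [1, 1, 2, 3, 4, 5, 6, 9]
```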


Eddy Petrișor: Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?

Friday 26th of July 2019 11:49:30 PM
Update 2019-Jul-27: In the code below my StackVec type was more complicated than it had to be: I had been using StackVec<'a, &'a mut T> instead of StackVec<'a, T> where T: 'a. I am unsure how I ended up making the type so complicated, but I suspect the lifetime mismatch errors and the attempt to implement IntoIterator were the reason why I made the original mistake.

Corrected code accordingly.



I'm trying to go through Sergio Benitez's CS140E class and I am currently at Implementing StackVec. StackVec is something that currently looks like this:

/// A contiguous array type backed by a slice.
///
/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`
/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`
/// requires no memory allocation as it is backed by a user-supplied slice. As a
/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This
/// results in `push` being fallible: if `push` is called when the vector is
/// full, an `Err` is returned.
#[derive(Debug)]
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
    capacity: usize,
}

The initial skeleton did not contain the derive Debug and the capacity field; I added those myself.

Now I am trying to understand what needs to happen behind:
  1. IntoIterator
  2. when in no_std
  3. with a custom type which has generics
  4. and has to use lifetimes
I don't know what I'm doing, but I might have managed to do it:

pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {
    type Item = &'a mut T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop();
        self.index += 1;

        result
    }
}
Corrected code as of 2019-Jul-27:
pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, T> {
    type Item = T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop().clone();
        self.index += 1;

        result
    }
}


I was really struggling to understand what the returned iterator type should be in my case, since, obviously, std::vec is out because a) I am trying to do a no_std implementation of something that should look a little like b) std::vec.

That was until I found this wonderful example on a custom type without using any already implemented Iterator, but defining the helper PixelIntoIterator struct and its associated impl block:

struct Pixel {
    r: i8,
    g: i8,
    b: i8,
}

impl IntoIterator for Pixel {
    type Item = i8;
    type IntoIter = PixelIntoIterator;

    fn into_iter(self) -> Self::IntoIter {
        PixelIntoIterator {
            pixel: self,
            index: 0,
        }

    }
}

struct PixelIntoIterator {
    pixel: Pixel,
    index: usize,
}

impl Iterator for PixelIntoIterator {
    type Item = i8;
    fn next(&mut self) -> Option<Self::Item> {
        let result = match self.index {
            0 => self.pixel.r,
            1 => self.pixel.g,
            2 => self.pixel.b,
            _ => return None,
        };
        self.index += 1;
        Some(result)
    }
}


fn main() {
    let p = Pixel {
        r: 54,
        g: 23,
        b: 74,
    };
    for component in p {
        println!("{}", component);
    }
}

The part in bold was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.

Note that, once I had only one new thing, the generics (luckily, the lifetime part seemed to simply be considered part of the generic thing), everything was easier to navigate.


Still, the fact there are so many new things at once, one of them being lifetimes - which can not be taught, only experienced @oli_obk - makes things very confusing.

Even if I think I managed it for IntoIterator, I am similarly confused about implementing "Deref for StackVec" for the same reasons.

I think I am seeing on my own skin what Oliver Scherer was saying, that big infodumps at the beginning are not the way to go. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, Oli?

All that aside, what should be the signature of the impl? Is this OK?

impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {
    type Target = T;

    fn deref(&self) -> &Self::Target;
}

Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it's so unclear, at least to me, at this point. And because of that I am unsure what the implementation should even look like.
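One plausible direction (my guess, not something from the class materials) is to deref to the initialized prefix of the backing slice rather than to a single T, so all `&[T]` methods come for free. A minimal sketch, with StackVec stripped down to just the fields Deref needs:

```rust
use core::ops::Deref;

/// Simplified StackVec: only the fields that Deref needs.
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
}

impl<'a, T> Deref for StackVec<'a, T> {
    // Deref to the initialized slice, so len(), iter(), indexing,
    // and every other &[T] method become available via auto-deref.
    type Target = [T];

    fn deref(&self) -> &Self::Target {
        &self.storage[..self.len]
    }
}

fn main() {
    let mut backing = [10u8, 20, 30, 40];
    let vec = StackVec { storage: &mut backing, len: 2 };
    // Slice methods are reachable through auto-deref:
    assert_eq!(vec.len(), 2);
    assert_eq!(vec.first(), Some(&10));
    println!("{:?}", &*vec); // prints "[10, 20]"
}
```

This keeps Target independent of whether T is Copy, and it also works in no_std since Deref lives in core::ops.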

I don't know what I'm doing, but I hope things will become clear with more exercise.

Jonathan Wiltshire: Daisy and George at Debian’s Conference Dinner

Friday 26th of July 2019 11:31:19 PM

Daisy and George have spent the week at the Debian Conference. Tonight is the conference dinner.

The menu is more complicated than usual, because it is in both Portuguese and English.

Daisy and George have made many friends this week.

Dinner is over. It’s time for some serious work.

Giovanni Mascellani: My take on OpenPGP best practices

Friday 26th of July 2019 09:30:00 PM

After having seen a few talks at DebConf on GnuPG and related things, I would like to document here how I currently manage my OpenPGP keys, in the hope they can be useful for other people or for discussion. This is not a tutorial, meaning that I do not give you the commands to do what I am saying, otherwise it would become way too long. If there is the need to better document how to implement these best practices, I will try to write another post.

I actually do have two OpenPGP certificates, D9AB457E and E535FA6D. The first one is RSA 4096 and the second one is Curve25519. The reason for having two certificates is algorithm diversity: I don't know which one between RSA and Curve25519 will be the first to be considered less secure or insecure, therefore I would like to be ready for both scenarios. Having two certificates already allows me to do signature hunting on both, in such a way that it is easy to transition from one to the other as soon as there is the need.

The key I currently use is the RSA one, which is also the one available in the Debian keyring.

(If you search on the keyservers you will find many other keys with my name; they are obsolete, meant for my internal usage or otherwise not in use; just ignore them!)

Even if the two primary keys are different, their subkeys are the same (apart from some older cruft now revoked), meaning that they have the same key material. This is useful, because I can use the same hardware token for both keys (most hardware tokens only have three key slots, one for each subkey capability, so to have two primary keys ready for use you need two tokens, unless the two keys share their subkeys). I have one subkey for each subkey capability (sign, encrypt and authentication), which are Curve25519 keys and are stored in a Nitrokey Start token. I also have, but tend not to use, one RSA subkey for each capability, which are stored on an OpenPGP card. Thanks to some date tweaking, both certificates are configured in such a way that the Curve25519 subkeys are always preferred over the RSA subkeys, but I also want to retain the RSA keys for corner cases where Curve25519 is not available.

The reason to choose Curve25519 over RSA for default usage is that Curve25519 keys are faster and generate smaller signatures. I have no idea which one is considered more secure, but I believe that neither of them is the weak link in my security chain.

The primary keys have an expiration date, which is always my birthday. Such a choice helps me remember, a couple of months in advance, to extend it by one year, so that the key remains valid. Choosing the update interval here is of course a compromise between security and convenience. One year seems fine. I see no advantage in setting an expiration date on subkeys, since I can always use the primary key to revoke them. It might be useful to set an expiration date if I had a subkey rotation strategy, but I don't, and unfortunately with OpenPGP it is a bit difficult to have one, since all subkeys are stored forever in the certificate, which would quickly become bloated.

The primary keys' private material is stored on an external disk that is normally disconnected from any computer, so it is completely inaccessible from the Internet. I connect it to my computer when I need to do operations that require the primary key, like signing other keys, managing subkeys or extending the key validity. This setup is not ideal, because it would be better to only connect the external storage to a machine that is always offline (and therefore less likely to have been compromised). But that would require maintaining another machine, and as usual one has to compromise between security and convenience. Also, that external disk contains other data too, so it gets connected to my laptop for operations other than working with OpenPGP certificates. I could improve here, but it is still better than keeping the primary key as a file on my computer.

I also have copies of my keys' private material (both for primary keys and subkeys) and revocation certificates on a bunch of paper sheets hidden somewhere in my house, just in case the external disk should fail. A common tool for this step is paperkey, although I followed this tutorial to encode the secret key in a number of data matrices.

Overall, while my setup is perfectible, I believe it is also reasonably secure for my use case, and quite convenient to use.

Steinar H. Gunderson: Vote craziness

Friday 26th of July 2019 06:00:54 PM

Of all the things I've seen in Debian, spamming DDs with a vote that's not a vote (“which of these terrible things the DPL did are the worst causes of everything that's wrong in the world”) has to be among the craziest. (I won't link to it here.)

Michael Prokop: Debian buster: changes in coreutils #newinbuster

Friday 26th of July 2019 04:45:15 PM

Debian buster is there, and similar to what we had with #newinwheezy, #newinjessie and #newinstretch it’s time for #newinbuster!

One package that isn’t new but its tools are used by many of us is coreutils, providing many essential system utilities. We have coreutils v8.26-3 in Debian/stretch and coreutils v8.30-3 in Debian/buster. Compared to the changes between jessie and stretch there are no new tools, but there are some new options available that I’d like to point out.

New features/options

b2sum + md5sum + sha1sum + sha224sum + sha256sum + sha384sum + sha512sum (compute and check message digest):

-z, --zero   end each output line with NUL, not newline, and disable file name escaping

cp (copy files and directories):

Use --reflink=never to ensure a standard copy is performed.
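On a copy-on-write filesystem such as btrfs or XFS, cp may otherwise share data blocks between source and destination. A quick illustration (file names here are mine) that a forced standard copy still produces identical content:

```shell
printf 'some data\n' > source.txt
# Force a byte-for-byte copy, never a lightweight reflink clone:
cp --reflink=never source.txt copy.txt
cmp source.txt copy.txt && echo "identical"
# → identical
```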

env (run a program in a modified environment):

-C, --chdir=DIR        change working directory to DIR
-S, --split-string=S   process and split S into separate arguments; used to pass multiple arguments on shebang lines
-v, --debug            print verbose information for each processing step
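Since Linux passes everything after the interpreter in a shebang line as a single argument, -S is what makes multi-flag shebangs possible: env splits the string into separate words before executing the command. A small demonstration (the script name is mine):

```shell
# The kernel hands env the single argument "-S sh -eu";
# env -S splits it into "sh" and "-eu" before exec'ing.
cat > demo.sh <<'EOF'
#!/usr/bin/env -S sh -eu
echo "hello from an -S shebang"
EOF
chmod +x demo.sh
./demo.sh
# → hello from an -S shebang
```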

ls (list directory contents), dir + vdir (list directory contents):

--hyperlink[=WHEN] hyperlink file names; WHEN can be 'always' (default if omitted), 'auto', or 'never'

This --hyperlink option is especially worth mentioning if you’re using a recent terminal emulator (especially one based on VTE); see Hyperlinks (a.k.a. HTML-like anchors) in terminal emulators for further information.

rm (remove files or directories):

--preserve-root=all do not remove '/' (default); with 'all', reject any command line argument on a separate device from its parent

split (split a file into pieces):

-x                      use hex suffixes starting at 0, not alphabetic
--hex-suffixes[=FROM]   same as -x, but allow setting the start value
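With -x the output pieces get hexadecimal suffixes (x00, x01, …) instead of the traditional alphabetic xaa, xab, …, which sorts naturally and is easier to generate programmatically (file names in this sketch are mine):

```shell
printf 'one\ntwo\nthree\n' > input.txt
# One line per piece, hex suffixes: creates x00, x01, x02
split -x -l 1 input.txt
cat x01
# → two
```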

timeout (run a command with a time limit):

-v, --verbose   diagnose to stderr any signal sent upon timeout

Changes:

date (print or set the system date and time):

--rfc-2822 (AKA -R) was renamed to --rfc-email, while --rfc-2822 is still supported
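Both spellings produce the same RFC 5322 (formerly RFC 2822) date string; -u and a fixed epoch timestamp just make the output reproducible here:

```shell
date --rfc-email -u -d @0
# → Thu, 01 Jan 1970 00:00:00 +0000
date --rfc-2822 -u -d @0   # the old name still works, identical output
```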

nl (write each FILE to standard output, with line numbers added):

Old default options: -bt -fn -hn -i1 -l1 -nrn -sTAB -v1 -w6
New default options: -bt -d'\:' -fn -hn -i1 -l1 -n'rn' -s<tab> -v1 -w6

Michael Prokop: Debian buster: changes in util-linux #newinbuster

Friday 26th of July 2019 04:43:03 PM

Debian buster is there, and similar to what we had with #newinwheezy, #newinjessie and #newinstretch it’s time for #newinbuster!

Update on 2019-07-26 22:55 UTC: Cyril Brulebois pointed out that findmnt (find a filesystem) was already available in Debian/stretch as part of the mount package; the blog post has been updated accordingly.

One package that isn’t new but its tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.29.2-1+deb9u1 in Debian/stretch and util-linux v2.33.1-0.1 in Debian/buster. There are many new options available and we also have a few new tools available.

Tools that have been taken over from / moved to other packages
  • cfdisk + fdisk + sfdisk (tools to display or manipulate a disk partition table) were moved from util-linux to fdisk
  • findmnt (find a filesystem) is no longer shipped via the mount binary package (of util-linux source package) but part of the util-linux binary package itself nowadays
  • setpriv (run a program with different Linux privilege settings) is no longer shipped as separate binary package of util-linux but part of the util-linux binary package itself nowadays
  • su (change user ID or become superuser) was moved from login package (kudos to Andreas Henriksson for this!)
Deprecated / removed tools

Tools that are no longer shipped with util-linux as of Debian/buster:

  • line binary (copies one line (up to a newline) from standard input to standard output), the head binary is its suggested replacement
  • pg binary (browse pagewise through text files), it’s marked deprecated in POSIX since 1997
  • tailf binary (follow the growth of a log file), it was deprecated in 2017 and `tail -f` from coreutils works fine
  • tunelp binary (set various parameters for the lp device), parallel port printers are suspected to be extinct by now
New tools

blkzone (run zone command on a device):

Usage:
 blkzone <command> [options] <device>

Run zone command on the given block device.

Commands:
 report                  Report zone information about the given device
 reset                   Reset a range of zones.

Options:
 -o, --offset <sector>   start sector of zone to act (in 512-byte sectors)
 -l, --length <sectors>  maximum sectors to act (in 512-byte sectors)
 -c, --count <number>    maximum number of zones
 -v, --verbose           display more details
 -h, --help              display this help
 -V, --version           display version

For more details see blkzone(8).

chmem (configure memory, set a particular size or range of memory online or offline):

Usage:
 chmem [options] [SIZE|RANGE|BLOCKRANGE]

Set a particular size or range of memory online or offline.

Options:
 -e, --enable        enable memory
 -d, --disable       disable memory
 -b, --blocks        use memory blocks
 -z, --zone <name>   select memory zone (see below)
 -v, --verbose       verbose output
 -h, --help          display this help
 -V, --version       display version

Supported zones:
 DMA DMA32 Normal Highmem Movable Device

For more details see chmem(8).

choom (display and adjust OOM-killer score):

Usage:
 choom [options] -p pid
 choom [options] -n number -p pid
 choom [options] -n number command [args...]

Display and adjust OOM-killer score.

Options:
 -n, --adjust <num>   specify the adjust score value
 -p, --pid <num>      process ID
 -h, --help           display this help
 -V, --version        display version

For more details see choom(1).

fincore (count pages of file contents in core):

Usage:
 fincore [options] file...

Options:
 -J, --json            use JSON output format
 -b, --bytes           print sizes in bytes rather than in human readable format
 -n, --noheadings      don't print headings
 -o, --output <list>   output columns
 -r, --raw             use raw output format
 -h, --help            display this help
 -V, --version         display version

Available output columns:
 PAGES  file data resident in memory in pages
 SIZE   size of the file
 FILE   file name
 RES    file data resident in memory in bytes

For more details see fincore(1).

lsmem (list the ranges of available memory with their online status):

Usage:
 lsmem [options]

List the ranges of available memory with their online status.

Options:
 -J, --json            use JSON output format
 -P, --pairs           use key="value" output format
 -a, --all             list each individual memory block
 -b, --bytes           print SIZE in bytes rather than in human readable format
 -n, --noheadings      don't print headings
 -o, --output <list>   output columns
     --output-all      output all columns
 -r, --raw             use raw output format
 -S, --split <list>    split ranges by specified columns
 -s, --sysroot <dir>   use the specified directory as system root
     --summary[=when]  print summary information (never,always or only)
 -h, --help            display this help
 -V, --version         display version

Available output columns:
 RANGE      start and end address of the memory range
 SIZE       size of the memory range
 STATE      online status of the memory range
 REMOVABLE  memory is removable
 BLOCK      memory block number or blocks range
 NODE       numa node of memory
 ZONES      valid zones for the memory range

For more details see lsmem(1).

New features/options

agetty + getty (alternative Linux getty):

--list-speeds display supported baud rates

blkid (locate/print block device attributes) gained a bunch of long options:

Options:
 --cache-file        same as -c
 --no-encoding       same as -d
 --garbage-collect   same as -g
 --output            same as -o
 --list-filesystems  same as -k
 --match-tag         same as -s
 --match-token       same as -t
 --list-one          same as -l
 --label             same as -L
 --uuid              same as -U

Low-level probing options:
 --probe             same as -p
 --info              same as -i
 --size              same as -S
 --offset            same as -O
 --usages            same as -u
 --match-types       same as -n

dmesg (print or control the kernel ring buffer):

-p, --force-prefix force timestamp output on each line of multi-line messages

fallocate (preallocate or deallocate space to a file):

 -i, --insert-range  insert a hole at range, shifting existing data
 -x, --posix         use posix_fallocate(3) instead of fallocate(2)
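Python exposes the same portable call that the new -x flag selects: os.posix_fallocate() wraps posix_fallocate(3), which works across POSIX filesystems, whereas fallocate(2) is Linux-specific. A small sketch:

```python
import os

def preallocate(path, nbytes):
    """Reserve nbytes of actual disk space for path, creating the file
    if needed - roughly `fallocate -x -l <nbytes> <path>`."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.posix_fallocate(fd, 0, nbytes)
    finally:
        os.close(fd)
```

After the call the file is at least nbytes long and the blocks are really allocated, so later writes into that range cannot fail with ENOSPC.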

findmnt (find a filesystem):

 --output-all  output all available columns
 --pseudo      print only pseudo-filesystems
 --real        print only real filesystems
 --tree        enable tree format output if possible

fstrim (discard unused blocks on a mounted filesystem):

 -A, --fstab    trim all supported mounted filesystems from /etc/fstab
 -n, --dry-run  does everything, but trim

hwclock (read or set the hardware clock (RTC)):

 -l              same as --localtime
 --delay <sec>   delay used when setting a new RTC time
 -v, --verbose   display more details

lsblk (list block devices):

Options:
 -z, --zoned      print zone model
 -T, --tree       use tree format output
 --sysroot <dir>  use specified directory as system root

Available output columns:
 PATH     path to the device node
 FSAVAIL  filesystem size available
 FSSIZE   filesystem size
 FSUSED   filesystem size used
 FSUSE%   filesystem use percentage
 PTUUID   partition table identifier (usually UUID)
 PTTYPE   partition table type
 ZONED    zone model
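The new FSSIZE/FSUSED/FSUSE% columns are derived from filesystem usage figures. A rough equivalent of FSUSE% can be computed from statvfs(3) on a mount point; note this is only an approximation, since lsblk itself gathers the numbers per block device:

```python
import os

def fsuse_percent(mountpoint="/"):
    """Approximate lsblk's FSUSE% column: used space as a percentage
    of the filesystem's total capacity."""
    st = os.statvfs(mountpoint)
    total = st.f_blocks * st.f_frsize
    if total == 0:          # e.g. some pseudo-filesystems
        return 0.0
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    return 100.0 * used / total
```

The result can differ slightly from `df`, which reports usage relative to the space available to unprivileged users (f_bavail) rather than f_bfree.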

lscpu (display information about the CPU architecture):

-J, --json use JSON for default or extended format

lslocks (list local system locks):

Options:
 -b, --bytes   print SIZE in bytes rather than in human readable format
 --output-all  output all columns

Available output columns:
 TYPE  kind of lock

lslogins (display information about known users in the system):

Options:
 --output-all  output all columns

Available output columns:
 PWD-METHOD  password encryption method

lsns (list namespaces):

Options:
 --output-all  output all columns
 -W, --nowrap  don't use multi-line representation

Available output columns:
 NETNSID  namespace ID as used by network subsystem
 NSFS     nsfs mountpoint (usually used by the network subsystem)

nsenter (run program with namespaces of other processes):

 -a, --all  enter all namespaces

partx (tell the kernel about the presence and numbering of on-disk partitions):

 --output-all             output all columns
 -S, --sector-size <num>  overwrite sector size
 --list-types             list supported partition types and exit

rename.ul (rename files):

 -n, --no-act        do not make any changes
 -o, --no-overwrite  don't overwrite existing files
 -i, --interactive   prompt before overwrite

runuser (run a command with substitute user and group ID):

 -w, --whitelist-environment <list>  don't reset specified variables
 -P, --pty                           create a new pseudo-terminal

setsid (run a program in a new session):

-f, --fork always fork

setterm (set terminal attributes):

--resize reset terminal rows and columns

unshare (run program with some namespaces unshared from parent):

--kill-child[=<signame>] when dying, kill the forked child (implies --fork), defaults to SIGKILL
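A plausible mechanism behind --kill-child is the kernel's parent-death signal: the forked child arms prctl(PR_SET_PDEATHSIG, sig) so the kernel delivers the signal when its parent dies. This is an assumption about the implementation, not a reading of unshare's source; the Linux-only ctypes sketch below (glibc assumed as libc.so.6) sets and reads back that signal for the current process:

```python
import ctypes
import signal

_libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_SET_PDEATHSIG = 1  # constants from <linux/prctl.h>
PR_GET_PDEATHSIG = 2

def set_parent_death_signal(sig):
    """Ask the kernel to send `sig` to this process when its parent dies."""
    if _libc.prctl(PR_SET_PDEATHSIG, int(sig), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")

def get_parent_death_signal():
    """Read back the currently armed parent-death signal (0 = none)."""
    out = ctypes.c_int(0)
    if _libc.prctl(PR_GET_PDEATHSIG, ctypes.byref(out), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_GET_PDEATHSIG) failed")
    return out.value
```

The setting is cleared on fork and on exec of a setuid binary, which is why it has to be armed in the child itself.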

wipefs (wipe a signature from a device):

Options:
 -i, --noheadings     don't print headings
 -J, --json           use JSON output format
 -O, --output <list>  COLUMNS to display (see below)

Available output columns:
 UUID    partition/filesystem UUID
 LABEL   filesystem LABEL
 LENGTH  magic string length
 TYPE    superblock type
 OFFSET  magic string offset
 USAGE   type description
 DEVICE  block device name
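The LENGTH/OFFSET columns hint at how signature probing works: a filesystem signature is just a magic string at a known offset. The table below is a hypothetical, heavily abridged stand-in for libblkid's real probe tables, for illustration only:

```python
# (offset, magic bytes, type) - a toy subset of well-known signatures
SIGNATURES = [
    (0x438, b"\x53\xef", "ext"),    # ext2/3/4: 0xEF53 little-endian,
                                    # 56 bytes into the superblock at 1024
    (0x0, b"XFSB", "xfs"),          # XFS superblock magic
    (0x3, b"NTFS    ", "ntfs"),     # NTFS OEM ID in the boot sector
]

def probe(path):
    """Return [(offset, type), ...] for every signature found in path."""
    found = []
    with open(path, "rb") as f:
        for offset, magic, fstype in SIGNATURES:
            f.seek(offset)
            if f.read(len(magic)) == magic:
                found.append((offset, fstype))
    return found
```

Wiping, as wipefs does, is then just overwriting those LENGTH bytes at OFFSET with zeroes; the filesystem data itself is left in place.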

zramctl (set up and control zram devices):

 -a, --algorithm lzo|lz4|lz4hc|deflate|842  compression algorithm to use
                                            (new compression algorithms: lz4hc, deflate, 842)
 --output-all                               output all columns

Deprecated and removed options

hwclock (read or set the hardware clock (RTC)):

 --badyear      ignore RTC's year because the BIOS is broken
 -c, --compare  periodically compare the system clock with the CMOS clock
 --getepoch     print out the kernel's hardware clock epoch value
 --setepoch     set the kernel's hardware clock epoch value to the value
                given with --epoch

unshare (run program with some namespaces unshared from parent):

-s (use --setgroups instead)

Dirk Eddelbuettel: Rcpp 1.0.2: Small Polish

Friday 26th of July 2019 12:56:00 AM

The second maintenance release of Rcpp, following up on the 10th anniversary and the 1.0.0 release, was prepared last Saturday and released to both the Rcpp drat repo and CRAN. After all the manual inspection (including a false-positive result from the reverse-dependency checks), it finally arrived on CRAN earlier today. The corresponding Debian package was also uploaded, and binaries have since been built.

Just like for Rcpp 1.0.1, there is a four-month gap between releases, which seems appropriate given both the changes still being made (see below) and the relative stability of Rcpp. It still takes work to release, as we run multiple extensive sets of reverse-dependency checks, so maybe one day we will switch to a six-month cycle.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1713 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 176 in BioConductor. Per the (partial) logs of CRAN downloads, we have had over one million downloads a month following the previous release.

This release features a number of different pull requests by four different contributors as detailed below.

Changes in Rcpp version 1.0.2 (2019-07-20)
  • Changes in Rcpp API:

    • Files in src/ are now consistently lowercase (Dirk in #956).

    • The Rcpp 'API Version' is now accessible via getRcppVersion() (Dirk in #963).

  • Changes in Rcpp Attributes:

    • The second END wrapper macro also gets UNPROTECT and a variable reference suppressing compiler warnings (Dirk in #953 fixing #951).

    • Default function arguments are parsed correctly (Pierrick Roger in #977 fixing #975)

  • Changes in Rcpp Sugar:

    • Added decreasing parameter to sort_unique() (James Balamuta in #958 addressing #950).
  • Changes in Rcpp Deployment:

    • Travis CI unit tests are now always running irrespective of the package version (Dirk in #954).
  • Changes in Rcpp Documentation:

    • The Rcpp-modules vignette now covers the RCPP_EXPOSED_* macros, and the Rcpp-extending vignette references it (Ralf Stubner in #959 fixing #952)

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: Beatrice Dowland

Thursday 25th of July 2019 02:26:15 PM

My second daughter, Beatrice Dowland, was born in the last week or so; we are all healthy and happy (but tired). I'm taking most of August off from work (and similar activities). See you soon!

(previously)

Jose M. Calhariz: at daemon 3.2.0

Wednesday 24th of July 2019 11:38:00 PM

There is a new version of the at daemon, 3.2.0. Some new features were implemented, hence the bump in the minor version.

You can download the source and the signature from http://software.calhariz.com/at/

The changelog:

at 3.2.0 (2019-07-24):

Jose M Calhariz:
  • Print time of new job before the input of the commands. Closes: #863045
  • Do not drop seconds on -t option. Closes: #792040
  • Start using nice levels from 0 instead of 2. Closes: #519716
  • Correctly handle DST when specifying a UTC time. Closes: #364975

Gerhard Poul:
  • Add flag to send email to other user. MR 5
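The -t fix concerns the POSIX timestamp format [[CC]YY]MMDDhhmm[.SS]: earlier versions discarded the optional .SS seconds. A hypothetical parser that keeps them (an illustration of the format, not at's actual code):

```python
from datetime import datetime

def parse_t(stamp):
    """Parse a [[CC]YY]MMDDhhmm[.SS] timestamp, preserving seconds."""
    sec = "0"
    if "." in stamp:
        stamp, sec = stamp.split(".", 1)
    # 12 digits means a 4-digit year, 10 digits a 2-digit year
    fmt = {12: "%Y%m%d%H%M", 10: "%y%m%d%H%M"}[len(stamp)]
    return datetime.strptime(stamp, fmt).replace(second=int(sec))
```

So `at -t 201907241138.45` now really means 11:38:45, not 11:38:00.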

Hideki Yamane: mmdebstrap is a nice tool, but the newest debootstrap is not so bad :)

Wednesday 24th of July 2019 01:02:29 PM
mmdebstrap is fast because it uses apt for package dependency resolution and download. Yeah, that's true, almost right - but I guess most of the reason it is "fast" is simply the package downloading.

debootstrap uses wget to download packages, and it does so serially, waiting for each download to finish; mmdebstrap's use of apt does not. If you use the "--cache-dir" option for debootstrap, the execution time is almost the same.

$ time sudo mmdebstrap unstable unstable-chroot
(snip)
real 2m58.670s
user 0m23.559s
sys 0m26.387s
$ time sudo debootstrap sid sid
(snip)
real 7m22.955s
user 0m57.450s
sys 0m37.894s

$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid
(snip)
real 2m44.752s
user 0m54.504s
sys 0m33.666s
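The timings above are dominated by download behaviour; the serial-versus-parallel difference can be shown with a toy Python sketch, where a time.sleep() stands in for per-package network latency (a conceptual illustration only, not how either tool is implemented):

```python
import concurrent.futures
import time

def fetch(pkg):
    """Stand-in for downloading one package: ~0.1 s of 'network' wait."""
    time.sleep(0.1)
    return pkg

packages = ["pkg%d" % i for i in range(10)]

# Serial, wget-style: total time is the sum of all waits.
start = time.monotonic()
serial = [fetch(p) for p in packages]
serial_t = time.monotonic() - start

# Parallel, apt-style pipelining: waits overlap.
start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as ex:
    parallel = list(ex.map(fetch, packages))
parallel_t = time.monotonic() - start
```

With ten simulated packages, the serial loop takes about 1 s while the parallel run takes roughly 0.1 s - the same order of improvement as the --cache-dir numbers above, where downloads stop mattering at all.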
Anyway, I should consider a "--use-apt" option or something like it for debootstrap - for a future release :)

Elana Hashman: How to grant (Tom Marble) Debian Maintainer access

Wednesday 24th of July 2019 04:00:00 AM

I run the Debian Clojure Team, which means that occasionally folks volunteer to help out with Clojure packaging. This is awesome! Since I'm lazy, I don't want to have to sponsor every package upload for folks who have proven their aptitude at packaging. Hence, sometimes I need to grant Debian Maintainers upload access to team packages.

Folks typically point at this email as documentation of how to grant DM access on packages. However, I have zero desire to hand-craft artisanal dak commands. So, I try to leverage some existing tools I already have installed on my system to help me out—namely, the dcut tool from the dput-ng package.

The commands

Tom Marble wanted DM access to the libjava-jdbc-clojure package, after I suggested he try doing a new version upload for it. I previously gave him DM access to maintain shimdandy and com-hypirion-io-clojure. But I couldn't remember exactly how I did it...

According to the dcut manpage, this should be as simple as running

dcut dm --uid "Tom Marble" --allow libjava-jdbc-clojure

However, there is a slight problem: I don't normally run dput (or dcut) on a machine with my Debian key present, since I keep my only copy on my laptop. For various reasons (mostly related to inertia, external monitors, and wifi drivers), I run Linux Mint on my laptop, and the version of dcut available there doesn't actually work properly, so I can't just run dcut locally...

What to do about this?

It turns out that there is an undocumented flag, -S or --save, that will save the generated commands locally.

dcut -s -S dm --uid "Tom Marble" --allow libjava-jdbc-clojure

The -s flag, or --simulate, ensures that we don't try to upload the file to the archive just yet. This will produce a file in the current directory with a name similar to ehashman-1564016122.dak-commands. Take a look:

ehashman@corn-syrup:~$ cat ehashman-1564016122.dak-commands
Archive: ftp.upload.debian.org
Uploader: Elana Hashman <ehashman@debian.org>
Action: dm
Fingerprint: 884A52C4AC8ABB931D158FA840BFEE868B055D9A
Allow: libjava-jdbc-clojure
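The dak-commands file is plain text with a handful of fields, so it is easy to generate without dcut at all. A hypothetical helper that produces the same shape (field names copied from the generated file above, not from any dput-ng API):

```python
def dak_dm_commands(uploader, fingerprint, packages,
                    archive="ftp.upload.debian.org"):
    """Build the body of a dak-commands file granting DM upload
    access for the given packages."""
    return "".join(line + "\n" for line in [
        "Archive: %s" % archive,
        "Uploader: %s" % uploader,
        "Action: dm",
        "Fingerprint: %s" % fingerprint,
        "Allow: %s" % " ".join(packages),
    ])
```

You would still need to clearsign the result and upload it, as described below, for the archive to act on it.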

Now is a good time to verify that the key and package are correct. You can then sign this file:

gpg --clearsign ehashman-1564016122.dak-commands

And use dcut to upload it:

dcut upload -f ehashman-1564016122.dak-commands

Once the file has been processed, check the FTP Master DM log to make sure your DM changes have been set correctly.

See you on the next episode of "me creating problems for myself with scary Debian tools"!

Aigars Mahinovs: Debconf 19 photos

Tuesday 23rd of July 2019 04:02:42 PM

The main feed for my photos from DebConf 19 in Curitiba, Brazil is currently in my GPhoto album. I will later also sync it to the DebConf git share.

The first batch is up, but now comes the hardest part - the group photo will be happening a bit later today :)

Update: the group photo is ready! The smaller version is in the GPhoto album, but the full version is linked from DebConf/19/Photos

Update 2: The day trip photos are up, and the photos are also in the DebConf Git LFS share.

Molly de Blanc: Free software activities (June 2019)

Tuesday 23rd of July 2019 02:14:29 PM

I know this is almost a month late, but I am sharing it nonetheless. My June was dominated by my professional and personal life, leaving little time for expansive free software activities. I’ll write a little more in my OSI report for June.

Activities (Personal)
  • The biggest thing I did was head over to the Other Cambridge (a.k.a. Cambridge Prime, a.k.a. Cambridge, UK) for a Debian sprint with the Debian Project Leader, Debian Account Managers, and Debian Anti-Harassment team.
  • We had some Anti-Harassment meetings.
  • We had some Outreach meetings.
  • I helped both teams prep for DebConf.
Activities (Professional)
  • Worked on organizing sponsorships for GUADEC. If you’re interested in attending or sponsoring GUADEC, I highly recommend it!
  • Wrote profiles of members of the GNOME community for the GNOME Engagement blog. I also wrote a newsletter for Friends of GNOME. You can see both online.
  • Attended Diversity & Inclusion team meetings, participated in the Engagement team discussions, and spoke with several GUADEC organizers.

More in Tux Machines

Late Coverage of Confidential Computing Consortium

  • Microsoft Partners With Google, Intel, And Others To Form Data Protection Consortium

    The software maker joined Google Cloud, Intel, IBM, Alibaba, Arm, Baidu, Red Hat, Swisscom, and Tencent to establish the Confidential Computing Consortium, a group committed to providing better private data protection, promoting the use of confidential computing, and advancing open source standards among members of the technology community.

  • #OSSUMMIT: Confidential Computing Consortium Takes Shape to Enable Secure Collaboration

    At the Open Source Summit in San Diego, California on August 21, the Linux Foundation announced the formation of the Confidential Computing Consortium. Confidential computing is an approach using encrypted data that enables organizations to share and collaborate, while still maintaining privacy. Among the initial backers of the effort are Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent. “The context of confidential computing is that we can actually use the data encrypted while programs are working on it,” John Gossman, distinguished engineer at Microsoft, said during a keynote presentation announcing the new effort. Initially there are three projects that are part of the Confidential Computing Consortium, with an expectation that more will be added over time. Microsoft has contributed its Open Enclave SDK, Red Hat is contributing the Enarx project for Trusted Execution Environments and Intel is contributing its Software Guard Extensions (SGX) software development kit. Lorie Wigle, general manager, platform security product management at Intel, explained that Intel has had a capability built into some of its processors called software guard which essentially provides a hardware-based capability for protecting an area of memory.

Graphics: Mesa Radeon Vulkan Driver and SPIR-V Support For OpenGL 4.6

  • Mesa Radeon Vulkan Driver Sees ~30% Performance Boost For APUs

    Mesa's RADV Radeon Vulkan driver just saw a big performance optimization land to benefit APUs like Raven Ridge and Picasso, simply systems with no dedicated video memory. The change by Feral's Alex Smith puts the uncached GTT type at a higher index than the visible vRAM type for these configurations without dedicated vRAM, namely APUs.

  • Intel Iris Gallium3D Is Close With SPIR-V Support For OpenGL 4.6

    This week saw OpenGL 4.6 support finally merged for Intel's i965 Mesa driver and will be part of the upcoming Mesa 19.2 release. Not landed yet but coming soon is the newer Intel "Iris" Gallium3D driver also seeing OpenGL 4.6 support. Iris Gallium3D has been at OpenGL 4.5 support and is quite near as well with its OpenGL 4.6 support thanks to the shared NIR support and more with the rest of the Intel open-source graphics stack. Though it's looking less likely that OpenGL 4.6 support would be back-ported to Mesa 19.2 for Iris, but we'll see.

The GPD MicroPC in 3 Minutes [Video Review]

In it I tackle the GPD MicroPC with Ubuntu MATE 19.10. I touch on the same points made in my full text review, but with the added bonus of moving images to illustrate my points, rather than words.
