Planet Debian - https://planet.debian.org/
Updated: 6 hours 42 min ago

Dirk Eddelbuettel: RcppExamples 0.1.9

8 hours 1 min ago

A new version of the RcppExamples package is now on CRAN.

The RcppExamples package provides a handful of short, concrete working examples showing how to set up basic R data structures in C++. It also provides a simple example for packaging with Rcpp.

This release brings a number of small fixes, including two from contributed pull requests (extra thanks for those!), and updates the package in a few spots. The NEWS extract follows:

Changes in RcppExamples version 0.1.9 (2019-08-24)
  • Extended DateExample to use more new Rcpp features

  • Do not print DataFrame result twice (Xikun Han in #3)

  • Missing parenthesis added in man page (Chris Muir in #5)

  • Rewrote StringVectorExample slightly to not run afoul of the -Wnoexcept-type warning for C++17-related name mangling changes

  • Updated NAMESPACE and RcppExports.cpp to add registration

  • Removed the no-longer-needed #define for new Datetime vectors

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steinar H. Gunderson: Chess article

13 hours 54 min ago

Last November (!), I was interviewed for a magazine article about computer chess and how it affects human play. Only a few short fragments remain of the hour-long discussion, but the article turned out to be very good nevertheless, and now it's freely available at last. Recommended Sunday read.

Thomas Lange: New FAI.me feature

18 hours 10 min ago

FAI.me, the build service for installation and cloud images, has a new feature. When building an installation image, you can enable automatic reboot or shutdown at the end of the installation in the advanced options. This was implemented due to requests by users who use the service for their VM instances or for computers without any keyboard connected.

The FAI.me homepage.

Didier Raboud: miniDebConf19 Vaumarcus – Oct 25-27 2019 – Registration is open

19 hours 9 min ago

The Vaumarcus miniDebConf19 is happening! Come see the fantastic view from the shores of Lake Neuchâtel, in Switzerland! We’re going to have two-and-a-half days of presentations and hacking in this marvelous venue and anybody interested in Debian development is welcome.

Registration is open

Registration is open now, and free, so go add your name and details on the Debian wiki: Vaumarcus/Registration

We’ll accept registrations until late, but don’t wait too long before making your travel plans! We have you covered with a lot of attendee information already: Vaumarcus.

Sponsors wanted

We’re looking for sponsors willing to help make this event possible and to make it easier for anyone interested to attend. We have not yet decided upon sponsor categories and benefits, but come talk to us already if you can help!

More hands wanted

Things are on a good track, but we need more help. Specifically, Content, Bar, Sponsoring and Attendee support would benefit from more hands.

Get in touch

We gather on the #debian.ch channel on irc.debian.org and on the debian-switzerland@lists.debian.org list. For more private matters, talk to board@debian.ch!

Looking forward to seeing a lot of you in Vaumarcus!

(This was also sent to debian-devel-announce@l.d.o and debian-events-eu@l.d.o.)

Joachim Breitner: ICFP 2019

Saturday 24th of August 2019 06:35:04 AM

ICFP 2019 in Berlin ended yesterday, and it was – as always – a great pleasure. This year was particularly noteworthy for the quite affordable conference hotel and the absolutely amazing food during the coffee breaks.

Since I am no longer a proper academic, I unsurprisingly did not have real research to present. Luckily I found ways to not just be a passive participant this year:

  • At FARM, I presented Kaleidogen, a small game (or toy, some would say) of mine. The room was packed with people, so thanks for all your interest! If you missed it, you can soon see the recording or read the demo abstract.

  • At PLMW, the mentoring workshop for young researchers, I ran the “Social event” together with Niki Vazou. Like last year, we randomly grouped the students and held a little competition where they had to match program listings to languages and algorithms. This was great fun, and we even managed to resolve the sudden problem of two ties with an ad-hoc extra quiz.

  • During his “State of GHC” speech, Simon Peyton Jones asked me to speak about the GHC Proposal Process for a few slides.

  • And since that is not enough stage time, I secured two spots in local stand-up comedy open mics on Monday and Friday, and even dragged sizable crowds of ICFP participants to these venues. One was a boat, and the other one a pretty dodgy bar in Neukölln, so that alone was a memorable experience. And the host was visibly surprised when his joke “I couldn’t be a software developer – I can’t commit” was met by such a roaring response…

Anyways, ICFP is over; back to disappearing into the churn of everyday work, and I hope to see you all next year.

Russ Allbery: Review: Thinking, Fast and Slow

Saturday 24th of August 2019 04:30:00 AM

Review: Thinking, Fast and Slow, by Daniel Kahneman

Publisher: Farrar, Straus and Giroux
Copyright: 2011
ISBN: 1-4299-6935-0
Format: Kindle
Pages: 448

Daniel Kahneman is an academic psychologist and the co-winner of the 2002 Nobel Memorial Prize in Economic Sciences for his foundational work on behavioral economics. With his long-time collaborator Amos Tversky, he developed prospect theory, a theory that describes how people choose between probabilistic alternatives involving risk. That collaboration is the subject of Michael Lewis's book The Undoing Project, which I have not yet read but almost certainly will.

This book is not only about Kahneman's own work, although there's a lot of that here. It's a general overview of cognitive biases and errors as explained through an inaccurate but useful simplification: modeling human thought processes as two competing systems with different priorities, advantages, and weaknesses. The book mostly focuses on the contrast between the fast, intuitive System One and the slower, systematic System Two, hence the title, but the last section of the book gets into hedonic psychology (the study of what makes experiences pleasant or unpleasant). That section introduces a separate, if similar, split between the experiencing self and the remembering self.

I read this book for the work book club, although I only got through about a third of it before we met to discuss it. For academic psychology, it's quite readable and jargon-free, but it's still not the sort of book that's easy to read quickly. Kahneman's standard pattern is to describe an oddity in thinking that he noticed, a theory about the possible cause, and the outcome of a set of small experiments he and others developed to test that theory. There are a lot of those small experiments, and all the betting games with various odds and different amounts of money blurred together unless I read slowly and carefully.

Those experiments also raise the elephant in the room, at least for me: how valid are they? Psychology is one of the fields facing a replication crisis. Researchers who try to reproduce famous experiments are able to do so only about half the time. On top of that, many of the experiments Kahneman references here felt artificial. In daily life, people spend very little time making bets of small amounts of money on outcomes with known odds. The bets are more likely to be for more complicated things such as well-being or happiness, and the odds of most real-world situations are endlessly murky. How much does that undermine Kahneman's conclusions? Kahneman himself takes the validity of this type of experiment for granted and seems uninterested in this question, at least in this book. He has a Nobel Prize and I don't, so I'm inclined to trust him, but it does give me some pause.

It didn't help that Kahneman cites the infamous marshmallow experiment approvingly and without caveats, which is a pet peeve of mine and means he fails my normal test for whether a popular psychology writer has taken a sufficiently thoughtful approach to analyzing the validity of experiments.

That caveat aside, this book is fascinating. One of the things that Kahneman does throughout, which is both entertaining and convincing, is show the reader one's brain making mistakes in real time. It's a similar experience to looking at optical illusions (indeed, Kahneman makes that comparison explicitly). Once told what's going on, you can see the right answer, but your brain is still determined to make an error.

Here's an example:

A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost?

I've prepped you by talking about cognitive errors, so you will probably figure out that the answer is not 10 cents, but notice how much your brain wants the answer to be 10 cents, and how easy it is to be satisfied with that answer if you don't care that much about the problem, even though it's wrong. The book is full of small examples like this.
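For the record, the arithmetic works out like this: if the ball costs b, the bat costs b + 1.00, so

    b + (b + 1.00) = 1.10
    2b = 0.10
    b = 0.05

which means the ball costs 5 cents and the bat $1.05: they add up to $1.10 and differ by exactly one dollar.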

Kahneman's explanation for the cognitive mistake in this example is the subject of the first part of the book: two-system thinking. System one is fast, intuitive, pattern-matching, and effortless. It's our default, the system we use to navigate most of our lives. System two is deliberate, slow, methodical, and more accurate, but it's effortful, to a degree that the effort can be detected in a laboratory by looking for telltale signs of concentration. System two applies systematic rules, such as the process for multiplying two-digit numbers together or solving math problems like the above example correctly, but it takes energy to do this, and humans have a limited amount of that energy. System two is therefore lazy; if system one comes up with a plausible answer, system two tends to accept it as good enough.

This in turn provides an explanation for a wealth of cognitive biases that Kahneman discusses in part two, including anchoring, availability, and framing. System one is bad at probability calculations and relies heavily on availability. For example, when asked how common something is, system one will attempt to recall an example of that thing. If an example comes readily to mind, system one will decide that it's common; if it takes a lot of effort to think of an example, system one will decide it's rare. This leads to endless mistakes, such as worrying about memorable "movie plot" threats such as terrorism while downplaying the risks of far more common events such as car accidents and influenza.

The third part of the book is about overconfidence, specifically the prevalent belief that our judgments about the world are more accurate than they are and that the role of chance is less than it actually is. This includes a wonderful personal anecdote from Kahneman's time in the Israeli military evaluating new recruits to determine what roles they would be suited for. Even after receiving clear evidence that their judgments were no better than random chance, everyone involved kept treating the interview process as if it had some validity. (I was pleased by the confirmation of my personal bias that interviewing is often a vast waste of everyone's time.)

One fascinating takeaway from this section is that experts are good at making specific observations of fact that an untrained person would miss, but are bad at weighing those facts intuitively to reach a conclusion. Keeping expert judgment of decision factors but replacing the final decision-making process with a simple algorithm can provide a significant improvement in the quality of judgments. One example Kahneman uses is the Apgar score, now widely used to determine whether a newborn is at risk of a medical problem.

The fourth part of the book discusses prospect theory, and this is where I got a bit lost in the endless small artificial gambles. However, the core idea is simple and quite fascinating: humans tend to make decisions based on the potential value of losses and gains, not the final outcome, and the way losses and gains are evaluated is not symmetric and not mathematical. Humans are loss-avoiding, willing to give up expected value to avoid something framed as a loss, and are willing to pay a premium for certainty. Intuition also breaks down at the extremes; people are very bad at correctly understanding odds like 1%, instead treating it like 0% or more than 5% depending on the framing.

I was impressed that Kahneman describes the decision-making model that preceded prospect theory, explains why it was more desirable because it was simpler and was only abandoned for prospect theory because prospect theory made meaningfully more accurate predictions, and then pivots to pointing out the places where prospect theory is clearly wrong and an even more complicated model would be needed. It's a lovely bit of intellectual rigor and honesty that too often is missing from both popularizations and from people talking about their own work.

Finally, the fifth section of the book is about the difference between life as experienced and life as it is remembered. This includes a fascinating ethical dilemma: the remembering self is highly sensitive to how unpleasant an experience was at its conclusion, but remarkably insensitive to the duration of pain. Experiments will indicate that someone will have a less negative memory of a painful event where the pain gradually decreased at the end, compared to an event where the pain was at its worst at the end. This is true even if the worst moment of pain was the same in both cases and the second event was shorter overall. How should we react to that in choosing medical interventions? The intuitive choice for pain reduction is to minimize the total length of time someone is in pain or reduce the worst moment of pain, both of which are correctly reported as less painful in the moment. But this is not the approach that will be remembered as less painful later. Which of those experiences is more "real"?

There's a lot of stuff in this book, and if you are someone who (unlike me) is capable of reading more than one book at a time, it may be a good book to read slowly in between other things. Reading it straight through, I got tired of the endless descriptions of experimental setup. But the two-system description resonated with me strongly; I recognized a lot of elements of my quick intuition (and my errors in judgment based on how easy it is to recall an example) in the system one description, and Kahneman's description of the laziness of system two was almost too on point. The later chapters were useful primarily as a source of interesting trivia (and perhaps a trick to improve my memory of unpleasant events), but I think being exposed to the two-system model would benefit everyone. It's a quick and convincing way to remember to be wary of whole classes of cognitive errors.

Overall, this was readable, only occasionally dense, and definitely thought-provoking, if quite long. Recommended if any of the topics I've mentioned sound interesting.

Rating: 7 out of 10

Bastian Venthur: Introducing Noir

Friday 23rd of August 2019 10:00:00 PM
tl;dr

Noir is a drop-in replacement for Black (the uncompromising code formatter), with the default line length set to PEP-8's preferred 79 characters. If you want to use it, just replace black with noir in your requirements.txt and/or setup.py and you're good to go.

Black is a Python code formatter that reformats your code to make it more PEP-8 compliant. It implements a subset of PEP-8; most notably, it deliberately ignores PEP-8's suggestion of a line length of 79 characters and defaults to a length of 88. I find the decision and the reasoning behind that somewhat arbitrary. PEP-8 is a good standard and there's a lot of value in having a style guide that is generally accepted and has a lot of tooling to support it.

When people ask to change Black's default line length to 79, the issue is usually closed with a reference to the reasoning in the README. But Black's developers are at least aware of this controversial decision, as Black's only option for configuring the (otherwise uncompromising) code formatter is, in fact, the line length.

Apart from that, Black is a good formatter that's gaining more and more popularity. And, of course, the developers have every right to follow their own taste. However, since Black is licensed under the terms of the MIT license, I tried to see what needs to be done in order to fix the line length issue.

Step 1: Changing the Default

This is the easiest part. You only have to change the DEFAULT_LINE_LENGTH value in black.py from 88 to 79, and black works as expected. Bonus points for doing the same in black.vim and pyproject.toml, but not strictly necessary.
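For illustration, Step 1 boils down to a one-line change in black.py, shown here as a minimal diff sketch (the exact position of the constant within the file may vary between Black versions):

-DEFAULT_LINE_LENGTH = 88
+DEFAULT_LINE_LENGTH = 79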

Step 2a: Fixing the Tests

Now comes the fun part. Black has an extensive test suite, and suddenly a lot of tests were failing because the fixtures that compare the unformatted input with the expected, formatted output were written with a line length of 88 characters in mind. To make it more interesting, the expected output comes in two forms: (1) as normal reformatted Python code (which is rather easy to fix) and (2) as a diff between the input and the expected output. The latter was really painful to fix -- although I'm very much used to reading diffs, I don't usually write them.

Step 2b: Fixing the Tests

After all fixtures were updated, some tests were still failing. It turned out that Black runs itself on its own source code as part of its test suite, making the tests fail if Black's code does not conform to Black's coding standards. While this is a genius idea, it meant that I had to reformat Black's code to match the new 79-character line length, generating a giant diff that is functionally unrelated to the fix I wanted to make but is now part of the fix anyway. This of course makes the whole patch horrible to maintain if you plan to follow along upstream's master branch.

Step 3: Publish

Since we already got this far, why not publish the fixed version of Black? To my surprise the name noir was still available on PyPI, so I renamed my version of Black to Noir and uploaded it to PyPI.

You can install it via:

$ pip install noir

Since I didn't change anything else, this is literally a drop-in replacement for Black. All you have to do is replace black with noir in your requirements.txt and/or setup.py and you're good to go. The script that executes Black is still called black and the server is still called blackd.

Outlook

While this was a fun exercise, the question remains what to do with it. I'll try to follow upstream and update my patch whenever a new version comes out. As new versions of Black are released only a handful of times a year, this might be feasible.

Depending on how painful it is to maintain the patch for the tests, I might drop the tests altogether, relying on upstream's tests passing on their side, and just maintain the trivial patch from Step 1: changing the DEFAULT_LINE_LENGTH. That can probably be automated somehow using GitHub Actions -- and I'll probably look into that at some point.

Best case scenario, of course, would be if Python changes its recommended line length to 88 and I wouldn't have to maintain noir in the first place :)

Iustin Pop: Aftershokz Aeropex first impressions

Friday 23rd of August 2019 09:54:26 PM

I couldn’t sleep one evening so I was randomly[1] browsing the internet. One thing led to another and I landed on a review of "bone-conducting" headphones, designed for safe listening to music or talking on the phone during sports.

I was intrigued. I’ve written before that proper music really motivates me when doing high-intensity efforts, so this seemed quite interesting. After reading more about it, and after finding that one can buy such things from local shops, I ordered a pair of Aftershokz Aeropex headphones.

To my surprise, they actually work as advertised. I'd say, they work despite the fancy company name :) There is a slight change to the tone of the sound (music) as compared to normal headphones, and the quality is not like one would expect from high-quality over-ear ones, but that's beside the point - the kind of music that I'd like to listen to while pedalling up a hill doesn't require very high fidelity[2].

And with regards to awareness of the environment, there is for sure some decrease, but I'd say it is minimal (especially if you don't listen at high volume). There is no "closed bubble" effect at all as you get with normal (even open) headphones, and definitely nothing like with in-ear ones. So I'd say this kind of headphone is reasonably safe, if you are careful.

So, first test: commute to work and back. On the way to work it was very windy, so that's mostly what I was hearing (especially during cross-winds), but it was still OK. Enjoyed the ride, nothing special.

On the return though… it was quite glorious. Normally (in Garmin speak) I get a small training effect: 0.8-1.0 aerobic, and much less anaerobic, around 0.5. It's a very short commute, but I try to push as much as I can. Today however, I got 1.3 aerobic and 1.6 anaerobic, because I rode standing quite a bit on the uphills. Higher anaerobic than aerobic on my commute is very rare… Also the "intensity minutes" I got for today were ~50% higher than on usual commute days. Max HR was not really changed, but the average HR was ~10bpm higher, which confirms I was able to motivate myself better. No Strava segment achievements though, since I was on a slow bike, but still, it felt much better than the same bike on other days.

I don’t know how the headphones feel when wearing them for a few hours at a time; they might be somewhat unpleasant, especially under the bike helmet, but on my short commute they were OK. But a 2-3-5 hour race is something entirely different.

Anyway, it seems from my first quick test this is an interesting technology. I guess I’ll have to see in a real effort how it helps? And if it doesn’t work well, I can blame the choice of music :)

  1. I was looking for updated Fenix 6 rumours. Either Garmin is playing a prank or it (the F6) will be quite cool itself: bigger screen, solar, more battery options, etc. etc.

  2. Rhythm/beat is very important, not so much good voice or high dynamic range. And when tired, most anything that is not soothing.

Kai-Chung Yan: My Open-Source Activities from January to August 2019

Friday 23rd of August 2019 06:42:06 AM

Welcome, reader! This is an infrequently updated post series that logs my activities within open-source communities. I do not work on open-source full-time, although I sincerely would love to. Therefore the posts may cover a ridiculously long period (even a whole year).

Debian & Google Summer of Code

Debian is a general-purpose Linux distribution that is widely used on the planet. I am a Debian Developer who works on packages related to Android SDK and the Java ecosystem.

I started a new package in an attempt to build the Android framework android.jar using the upstream build systems involving Ninja, Soong and others. Since the beginning we have been writing our own (very simple) makefiles to build the binaries in AOSP, because their build logic tends to be simple and straightforward; that worked until we got to android.jar. Building it requires digging into so much code that it became incredibly hard to maintain, which is why we still haven’t brought in any newer version since android-framework-23. This is problematic, as developers can’t build any apps that target Android 7+.

After a month of work, this package is finally done. Once all its dependencies are packaged, it will be ready to upload. This is where the students of Google Summer of Code (GSoC) come in!

This year’s GSoC projects related to Android SDK are:

Thanks to their hard work, we managed to upload these packages to Debian:

Voidbuilder

Voidbuilder is a simple program that mimics pbuilder but uses Docker and requires zero configuration. I have been using it privately and am quite satisfied.

I made some bugfixes and adopted Node.js 12 so that it can make use of the latest experimental ES Modules support. Versions 1.0.0 and 1.0.1 have been released.

Kai-Chung Yan: My Open-Source Activities from April 2017 to March 2018

Friday 23rd of August 2019 06:40:36 AM

Because of all the nonsense coming from my current school, I hadn’t been able to spend too much time on open source projects. As a result, this post sums up an entire year of activities after the previous one… Surprised me a bit too.

Kai-Chung Yan: Attending FOSDEM 2016: First Touch in Open Source Community

Friday 23rd of August 2019 06:05:32 AM

FOSDEM 2016 happened at the end of January, but I have been too busy to write about my first trip to an open source event until now.

FOSDEM takes place in Belgium, which is almost ten thousand kilometers from my home. Luckily, Google kindly offered sponsorship for travel to Belgium and lodging for former GSoC mentors and students in Debian, which made my trip possible without giving my dad headaches. Thank you Google!

Open source meetings are really fun. Imagine you have been working hard on an exciting project with several colleagues around the world who have never met you, and now you have a chance to meet them and make friends with them. Cool! However, I am not involved with any project too deeply, so I didn’t have high expectations. But I was still excited when I first saw my mentor Hans-Christoph Steiner! Pity that we forgot to take a picture, as I’m not the kind of person who takes selfies every day.

One of the most interesting projects I saw during FOSDEM is Ring. Ring is distributed communication software without central servers. All Ring clients in the world are connected to several others and find a particular user using a distributed hash table. A Ring client is identified by a key pair, whose public key serves as the ID. Thus, Ring is resistant to censorship and eavesdropping, which is great for Chinese citizens and feared by the Chinese government. After I got home I learned about another similar but older project, Tox, which seems to be more feature-rich than Ring but still not good enough for me to promote it. There’s a huge disadvantage to both projects, which is high battery drain on Android. I hope someday they will improve that.

At the end of FOSDEM I joined the volunteers to do the clean-up. We cleaned all the buildings, restored the rooms and finally shared dinner in the hall of the K Building. I’m not European, so I didn’t talk much with them, but this was really an unforgettable experience. I hope I can join the next FOSDEM soon.

Kai-Chung Yan: Introducing Gradle 1.12 in Debian

Friday 23rd of August 2019 06:05:08 AM

After 5 weeks of work, my colleague Komal Sukhani and I succeeded in bringing Gradle 1.12 with other packages into Debian. Here is a brief note of what we’ve done:

Note that both Gradle and Groovy are in the experimental distribution because Groovy build-depends on Gradle, and Gradle build-depends on bnd 2.1.0, which is in experimental as well.

Updating these packages took us an entire month because my summer vacation did not start until the day we uploaded Gradle and Groovy, which means we were doing the job in our spare time (Sukhani had finished her semester at the beginning, though).

The next step is to update Gradle to 2.4 as soon as possible, because Sukhani has started her work on the Java part of the Android SDK, which requires Gradle 2.2 or above. Before updating Gradle I need to package the Java SDK for AWS, which enables Gradle to access S3 resources. I also need to package gradle-1.12 as a separate package and use it to build gradle_2.4-1.

After that, I will start my work on the C/C++ part of Android SDK, which is far more complicated and messy than I had expected. Yet I enjoy the summer coding. Happy coding, all open source developers!

Finally, feel free to check out my weekly report in Debian’s mailing list:

Kai-Chung Yan: Google Summer of Code Started: Packaging Android SDK for Debian

Friday 23rd of August 2019 06:04:52 AM

And here it is: I have been accepted as a GSoC 2015 student! Actually it has been a while since the results came out at the end of April. When I was applying for this GSoC, I never expected that I could be accepted.

So what is Google Summer of Code, in case someone hasn’t heard about it at all? Google Summer of Code is an annual program hosted by Google which gathers college students around the world to contribute to open source software. Every year hundreds of open source organizations join GSoC to provide project ideas and mentors, and thousands of students apply, choose a project and work on it during the summer, and get well paid by Google if they manage to finish the task. This year we have 1051 students accepted, with 49 from China and 2 from Taiwan. You can read more details in this post.

Although my geography textbooks and my geography teacher said so, I had not believed that India is a software giant, until I saw that India has the most students accepted and that my partner on this project is a girl from India!

Project Details

The project we will work on this summer is to package the Android SDK for Debian. In addition to that, we will also update the existing packages that are essential to Android development, e.g. Gradle. Although some may say this project is not that complicated, it still has lots of work to do, which makes it a rather large project with two students working on it and a co-mentor. My primary mentor is Hans-Christoph Steiner from The Guardian Project, and he also wrote a post about the project.

Why do we need to do this? There are reasons of security, convenience and ideals, but the biggest one for me is that if you use Linux and you write Android apps, or perhaps you are just about to flash CyanogenMod onto your device, there will be no better way than to just type sudo aptitude install adb. More information on this project can be found on Debian’s Android Tools Team page.

Problems We Are Facing

Currently (mid May) the official beginning of the coding phase has not yet arrived, but we have held a meeting on IRC and confirmed the largest problems we have so far.

The first problem is the packaging of Gradle. Gradle is a rather new and innovative build automation system, with which most Android apps and the Android SDK tools written in Java are built. It is a build system, so unsurprisingly it is built with itself. In this case, updating Gradle is much harder. Currently Gradle is at version 2.4 but the one in Debian is 1.5. In the worst case, we have to build all versions of Gradle from 1.6 to 2.4 one by one due to its self-dependency.

In reality, building a project with Gradle is way easier and more pleasant than with any other build system, because it handles dependencies in a brilliant way by downloading everything it needs, including Gradle itself. Thus it does not matter whether you have installed Gradle, or even whether you are using Linux or Windows. However, when building the Debian package, it seems that we have to abandon that convenience and make the build totally offline, relying only on the things in Debian. This is for security and reproducibility, but the packaging will be much more complicated since we have to modify lots of code in the upstream build scripts. Also in that case, since the build is restricted to rely on what already exists in a Debian system, quite a few plugins that use software that isn’t in Debian yet will be excluded from the Debian version of Gradle, which makes it less useful than simply launching the Gradle wrapper. In that case, I suppose there will be very few people really using the Gradle in the Debian repository.

The second problem is how to determine which Git commit we should check out from the Android SDK repository to build a particular version of the tools. The Android SDK does not release its source code in tarball form, so we have to deal with the Git repository. What’s worse, the tools in the Android SDK come from different repositories, and they have almost no information on the tools’ version numbers at all. We can’t confirm which commit, tag or branch in the repository corresponds to a particular version. And what’s way worse, the Android SDK has 3 parts, namely SDK-tools, Build-tools and Platform-tools, each of which has different version numbers! And what’s way, way worse, I have posted the question to various places and no one has answered me.

After our IRC discussion, we have been focusing on Gradle. I am still reading documentation about Debian packaging and using Gradle. All I hope now is that we can finish the project quickly and well, with no regrets left this summer. Also I hope my GSoC T-shirt will be delivered to my home as soon as possible, it’s really cool!

Do You Want to Join GSoC as Well?

Surprisingly, most students in my school haven’t heard about Google Summer of Code at all, which is why there are only 2 accepted students from Taiwan. But if you know about it and you study computer science (or some other ridiculous department related to computer science, just like mine), do not hesitate to join next year’s edition! Contribute to open source and get highly paid (5500 USD this year); is that not really cool? Here I am offering you several tips.

Before I submitted my proposal, I saw a guy from KDE who wrote some tips under a shocking title. Reading that is enough, I guess, but I still need to list some points:

  • Contact your potential mentors even before you write your proposal; that really helps.
  • Remember to include a rough schedule in your proposal; it is very important.
  • Be interactive with your mentor; ask good questions often.

Have fun in the summer!

Holger Levsen: 20190823-cccamp

Friday 23rd of August 2019 12:38:59 AM
Dialing 8874 on the local GSM and DECT networks

Dialing 8874 on the local GSM and DECT networks currently (it's 2:30 in the morning) lets you hear this automatic announcement: "The current temperature of the pool is 36.2 degrees" and said pool is like 15m away, temporarily built beneath a forest illuminated with disco balls...

I <3 cccamp.

Dirk Eddelbuettel: Rcpp now used by 1750 CRAN packages

Thursday 22nd of August 2019 12:58:00 AM

Since this morning, Rcpp stands at just over 1750 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages in early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, and 1500 packages in November 2018. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent last summer, nine percent in mid-December 2016, cracked ten percent in the summer of 2017 and eleven percent in 2018. We are currently at 11.83 percent: a little over one in nine packages. There is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks, but not in as much detail as it could), and how the growth of Rcpp seems to be slowing somewhat, both outright and even more so as a proportion of CRAN – just like one would expect a growth curve to.

1750+ user packages is pretty mind-boggling. We can compare this with the progression of CRAN itself, compiled by Henrik in a series of posts and emails to the main development mailing list. Not that long ago CRAN itself did not have 1500 packages, and here we are at almost 14810 with Rcpp at 11.84% and still growing (though maybe more slowly). Amazeballs.

The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Joey Hess: releasing two haskell libraries in one day: libmodbus and git-lfs

Wednesday 21st of August 2019 04:28:27 PM

The first library is a libmodbus binding in haskell.

There are a couple of other haskell modbus libraries, but none that support serial communication out of the box. I've been using a python library to talk to my solar charge controller, but it is not great at dealing with the slightly flakey interface. The libmodbus C library has features that make it more robust, and it also supports fast batched reads.

So a haskell interface to it seemed worth starting while I was doing laundry, and then for some reason it seemed worth writing a whole bunch more FFIs that I may never use, so it covers libmodbus fairly extensively. 660 lines of code all told.

Writing a good binding to a C library has art to it. I've seen ones that are so close you feel you're writing C and not haskell. On the other hand, some are so far removed from the underlying library that its documentation does not carry over at all.

I tried to strike a balance. Same function names so the extensive libmodbus documentation is easy to refer to while using it, but plenty of haskell data types so you won't mix up the parity with the stop bits.

And while it uses a mutable vector under the hood as the buffer for the FFI interface, so it can be just as fast as the C library, I also made functions for reading stuff like registers and coils be polymorphic so easier data types can be used at the expense of a bit of extra allocation.

The big win in this haskell binding is that you can leverage all the nice haskell libraries for dealing with binary data to parse the modbus data, rather than the ad-hoc integer and float conversion stuff from the C library.

For example, the Epever solar charge controller has its own slightly nonstandard way to represent 16 bit and 32 bit floats. Using the binary library to parse its registers in applicative style came out quite nice:

data Epever = Epever
    { pv_array_voltage :: Float
    , pv_array_current :: Float
    , pv_array_power :: Float
    , battery_voltage :: Float
    }
    deriving (Show)

getEpever :: Get Epever
getEpever = Epever
    <$> epeverfloat  -- register 0x3100
    <*> epeverfloat  -- register 0x3101
    <*> epeverfloat2 -- register 0x3102 (low) and 0x3103 (high)
    <*> epeverfloat  -- register 0x3104
  where
    epeverfloat = decimals 2 <$> getWord16host
    epeverfloat2 = do
        l <- getWord16host
        h <- getWord16host
        return (decimals 2 (l + h*2^16))
    decimals n v = fromIntegral v / (10^n)

The second library is a git-lfs implementation in pure Haskell.

Emphasis on the pure -- there is not a scrap of IO code in this library, just 400+ lines of data types, parsing, and serialization.

I wrote it a couple weeks ago so git-annex can store files in a git-lfs remote. I've also used it as a git-lfs server, mostly while exploring interesting edge cases of git-lfs.

This work was sponsored by Jake Vosloo on Patreon.

Russ Allbery: Review: Trail of Lightning

Wednesday 21st of August 2019 03:28:00 AM

Review: Trail of Lightning, by Rebecca Roanhorse

Series: The Sixth World #1
Publisher: Saga
Copyright: 2018
ISBN: 1-5344-1351-0
Format: Kindle
Pages: 286

Maggie Hoskie is a monster hunter. Trained and then inexplicably abandoned by Neizghání, an immortal monster-slayer of her people, the Diné (Navajo), she's convinced that she's half-monster herself. Given that she's the sort of monster hunter who also kills victims that she thinks may be turned into monsters themselves, she may have a point. Apart from contracts to kill things, she stays away from nearly everyone except Tah, a medicine man and nearly her only friend.

The monster that she kills at the start of the book is a sign of a larger problem. Tah says that it was created by someone else using witchcraft. Maggie isn't thrilled at the idea of going after the creator alone, given that witchcraft is what Neizghání rescued her from in an event that takes Maggie most of the book to be willing to describe. Tah's solution is a partner: Tah's grandson Kai, a handsome man with a gift for persuasion who has never hunted a monster before.

If you've read any urban fantasy, you have a pretty good idea of where the story goes from there, and that's a problem. The hair-trigger, haunted kick-ass woman with a dark past, the rising threat of monsters, the protagonist's fear that she's a monster herself, and the growing romance with someone who will accept her is old, old territory. I've read versions of this from Laurell K. Hamilton twenty-five years ago to S.L. Huang's ongoing Cas Russell series. To stand out in this very crowded field, a series needs some new twist. Roanhorse's is the deep grounding in Native American culture and mythology. It worked well enough for many people to make it a Hugo, Nebula, and World Fantasy nominee. It didn't work for me.

I partly blame a throw-away line in Mike Kozlowski's review of this book for getting my hopes up. He said in a parenthetical note that "the book is set in Dinétah, a Navajo nation post-apocalyptically resurgent." That sounded great to me; I'd love to read about what sort of society the Diné might build if given the opportunity following an environmental collapse. Unfortunately, there's nothing resurgent about Maggie's community or people in this book. They seem just as poor and nearly as screwed as they are in our world; everyone else has just been knocked down even farther (or killed) and is kept at bay by magical walls. There's no rebuilding of civilization here, just isolated settlements desperate for water, plagued by local warlords and gangs, and facing the added misery of supernatural threats. It's bleak, cruel, and unremittingly hot, which does not make for enjoyable reading.

What Roanhorse does do is make extensive use of Native American mythology to shape the magic system, creatures, and supernatural world view of the book. This is great. We need a wider variety of magic systems in fantasy, and drawing on mythological systems other than Celtic, Greek, Roman, and Norse is a good start. (Roanhorse herself is Ohkay Owingeh Pueblo, not Navajo, but I assume without any personal knowledge that her research here is reasonably good.) But, that said, the way the mythology plays out in this book didn't work for me. It felt scattered and disconnected, and therefore arbitrary.

Some of the difficulty here is inherent in the combination of my unfamiliarity and the challenge of adopting real-world mythological systems for stories. As an SFF reader, one of the things I like from the world-building is structure. I like seeing how the pieces of the magical system fit together to build a coherent set of rules, and how the protagonists manipulate those rules in the story. Real-world traditions are rarely that neat and tidy. If the reader is already familiar with the tradition, they can fill in a lot of the untold back story that makes the mythology feel more coherent. If the author cannot assume that knowledge, they can get stuck between simplifying and restructuring the mythology for easy understanding or showing only scattered and apparently incoherent pieces of a vast system. I think the complaints about the distorted and simplified version of Celtic mythology in a lot of fantasy novels from those familiar with the real thing is the flip-side to this problem; it's worse mythology, but it may be more approachable storytelling.

I'm sure it didn't help that one of the most important mythological figures of this book is Coyote, a trickster god. I have great intellectual appreciation for the role of trickster gods in mythological systems, but this is yet more evidence that I rarely get along with them in stories. Coyote in this story is less of an unreliable friend and more of a straight-up asshole who was not fun to read about.

That brings me to my largest complaint about this novel: I liked exactly one person in the entire story. Grace, the fortified bar owner, is great and I would have happily read a book about her. Everyone else, including Maggie, ranged from irritating to unbearably obnoxious. I was saying the eight deadly words ("I don't care what happens to these people") by page 100.

Here, tastes will differ. Maggie acts the way that she does because she's sitting on a powder keg of unprocessed emotional injury from abuse, made far worse by Neizghání's supposed "friendship." It's realistic that she shuts down, refuses to have meaningful conversations, and lashes out at everyone on a hair trigger. I felt sympathy, but I didn't like her, and liking her is important when the book is written in very immediate present-tense first person. Kai is better, but he's a bit too much of a stereotype, and I have an aversion to supposedly-charming men. I think some of the other characters could have been good if given enough space (Tah, for instance), but Maggie's endless loop of self-hatred doesn't give them any room to breathe.

Add on what I thought were structural and mechanical flaws (the first-person narration is weirdly specific and detail-oriented in a way that felt like first-novel mechanical problems, and the ending is one of the least satisfying and most frustrating endings I have ever read in a book of this sort) and I just didn't like this. Clearly there are a lot of people nominating and voting for awards who think I'm wrong, so your mileage may vary. But I thought it was unoriginal except for the mythology, unsatisfying in the mythology, and full of unlikable characters and unpleasant plot developments. I'm unlikely to read more in this series.

Followed by Storm of Locusts.

Rating: 4 out of 10

Philipp Kern: Alpha: Self-service buildd givebacks

Tuesday 20th of August 2019 10:54:00 PM
Builds on Debian's build farm sometimes fail transiently. Sometimes those failures are legitimate flakes, for instance when an in-progress build happens to exhaust its resources because of other builds on the same machine. Until now, you always needed to mail the buildd, wanna-build admins or the Release Team directly in order to get the builds re-queued.

As an alpha trial I implemented self-service givebacks as a web script. As SSO for Debian developers is now a thing, it is trivial to add authentication in a way that a role account can use to act on your behalf. While at work this would all be an RPC service, I figured that a little CGI script would do the job just as well. So lo and behold, accessing
https://buildd.debian.org/auth/giveback.cgi?pkg=<package>&suite=<suite>&arch=<arch> with the right parameters set:

You are authenticated as pkern. ✓
Working on package fife, suite sid and architecture mipsel. ✓
Package version 0.4.2-1 in state Build-Attempted, can be given back. ✓
Successfully given back the package. ✓
Note that you need to be a Debian developer with a valid SSO client certificate to access this service.
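For convenience, here is a minimal sketch of calling the service from Python, assuming the requests library and that you have exported your SSO client certificate and key to local files (the file paths, and the package details copied from the example above, are just placeholders; curl with --cert works just as well):

import requests

# Package, suite and architecture to give back, as in the example output above.
params = {"pkg": "fife", "suite": "sid", "arch": "mipsel"}

# Authenticate with the Debian SSO client certificate and private key
# (hypothetical local file names).
response = requests.get(
    "https://buildd.debian.org/auth/giveback.cgi",
    params=params,
    cert=("sso-client.pem", "sso-client.key"),
)

# The body contains the validation output shown above, including debugging
# information when a request is rejected.
print(response.status_code)
print(response.text)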

So why do I say alpha? We still expect Debian developers to act responsibly when looking at build failures. A lot of the time there is a legitimate bug in the package, and the last thing we would like to see as a project is someone addressing flakiness by continuously retrying a build. Access to this service is logged. Most people coming to us today did their due diligence and tried reproducing the issue on a porterbox. We still expect that to happen, but this aims to cut down on the round-trip time until an admin gets around to processing your request, which has been longer than necessary recently. We will audit the logs and see if particular packages stand out.

There can also still be bugs. Please file them against buildd.debian.org when you see them. Please include a copy of the output, which includes validation and important debugging information when requests are rejected. Also this all only works for packages in Build-Attempted. If the build has been marked as Failed (which is a manual process), you still need to mail us. And lastly the API can still change. Luckily the state change can only happen once, so it's not much of a problem for the GET request to be retried. But it should likely move to POST anyhow. In that case I will update this post to reflect the new behavior.

Thanks to DSA for making sure that I run the service sensibly using a dedicated role account as well as WSGI and doing the work to set up the necessary bits.

Bits from Debian: salsa.debian.org: Postmortem of failed Docker registry move

Tuesday 20th of August 2019 11:20:00 AM

The Salsa admin team provides the following report about the failed migration of the Docker container registry. The Docker container registry stores Docker images, which are for example used in the Salsa CI toolset. This migration would have moved all data off to Google Cloud Storage (GCS) and would have lowered the used file system space on Debian systems significantly.

The Docker container registry is part of the Docker distribution toolset. This system supports multiple backends for file storage: local, Amazon Simple Storage Service (Amazon S3) and Google Cloud Storage (GCS). As Salsa already uses GCS for data storage, the Salsa admin team decided to move all the Docker registry data off to GCS too.

Migration and rollback

On 2019-08-06 the migration process was started. The migration itself went fine, although it took a bit longer than anticipated. However, as not all parts of the migration had been properly tested, a test of the garbage collection triggered a bug in the software.

On 2019-08-10 the Salsa admins started to see problems with garbage collection. The job running it timed out after one hour. Within this timeframe it did not even manage to collect information about all used layers to see what it could clean up. A source code analysis showed that this design flaw can't be fixed.

On 2019-08-13 the change was rolled back to storing data on the file system.

Docker registry data storage

The Docker registry stores all of the data, sans indexing or reverse references, in a file-system-like structure comprising 4 separate types of information: manifests of images and contents, tags for the manifests, deduplicated layers (or blobs) which store the actual data, and lastly links which show which deduplicated blobs belong to their respective images. None of this allows for easy searching within the data.

The file system structure is built as append-only, which allows for adding blobs and manifests and for the addition, modification, or deletion of tags. However, cleanup of items other than tags is not achievable with the maintenance tools.

There is a garbage collection process which can be used to clean up unreferenced blobs; however, according to the documentation the process can only be used while the registry is set to read-only, and unfortunately it cannot be used to clean up unused links.
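For reference, the upstream registry binary documents a garbage-collect subcommand that is run against the registry's own configuration file while the registry is in read-only mode, roughly like this (the configuration path is only an example):

$ registry garbage-collect --dry-run /etc/docker/registry/config.yml
$ registry garbage-collect /etc/docker/registry/config.yml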

Docker registry garbage collection on external storage

For the garbage collection the registry tool needs to read a lot of information, as there is no indexing of the data. The tool connects to the storage medium and proceeds to download … everything: every single manifest and the information about the referenced blobs, which now takes over 1 second per manifest. This process will take a significant amount of time, which in the current configuration of external storage would make the cleanup nearly impossible.

Lessons learned

The Docker registry is a data storage tool that can only properly be used in append-only mode. If you never clean up, it works well.

As soon as you want to actually remove data, it goes bad. For Salsa, cleaning up old data is a necessity, as the registry currently grows by about 20GB per day.

Next steps

Sadly there is not much that can be done using the existing Docker container registry. Maybe GitLab or someone else would like to contribute a new implementation of a Docker registry, either integrated into GitLab itself or stand-alone?

Raphaël Hertzog: Promoting Debian LTS with stickers, flyers and a video

Tuesday 20th of August 2019 10:45:36 AM

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor the Debian LTS project through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some people will still have the impression that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:

Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.

