Planet Debian - https://planet.debian.org/

Russ Allbery: Review: Rogue Protocol

Friday 20th of December 2019 03:54:00 AM

Review: Rogue Protocol, by Martha Wells

Series: Murderbot Diaries #3
Publisher: Tor.com
Copyright: August 2018
ISBN: 1-250-18543-2
Format: Kindle
Pages: 150

This is the third Murderbot novella. It could probably be read on its own, since each is a self-contained story, but reading in order will add some depth via the increasing thoughtfulness of Murderbot's motives.

There needs to be an error code that means "I received your request but decided to ignore you."

Murderbot is trying to get out of the Corporation Rim. Its former owner, GrayCris, is entangled in litigation over its sketchy actions (told in the previous novellas), including a failed terraforming attempt outside of the Corporation Rim. This is a convenient combination. Murderbot can get out of the Rim and away from potential pursuers while looking around the terraforming attempt for evidence that could hurt GrayCris. And possibly also give its primary rescuer from All Systems Red less justification to be fighting dangerous corporations and more reason to go home where she will be safe.

That's how Murderbot ends up as unexpected passenger security on a trip to HaveRatton station, giving rise to the above quote (and several other great moments).

Starting at HaveRatton station, Rogue Protocol follows a similar path as Artificial Condition: Murderbot picks up some humans on the way to its objective (in this case, the team from the company that took over the failing terraforming station and is surveying it), can't resist trying to protect them, and ends up serving as security because, well, someone has to. This time around, that comes with irritated and disgusted criticism of the failings of the human security that is supposed to be doing that job. That was the best part of the book. The situation isn't quite what it appears to be on the surface, of course, which leads to some tense and exciting tactical maneuvering on an abandoned station against daunting odds.

The new element of Rogue Protocol is Miki, another humanform robot. I had mixed feelings about Miki. I think this was intentional — Murderbot also has mixed feelings about Miki — but I'm still not sure if I liked the overall effect. It is more naive and simple-minded than Murderbot, but is the friend of one of the humans. Murderbot, and the reader, are initially suspicious that "pet" may be a better word than "friend," but that's not quite the case. It's a disturbing look at another option for sentient robots in this universe other than simple property, one that's better in some ways, and which seems to work for Miki, but is nonetheless ripe for abuse.

Miki is central to the emotional thrust of the novella, and I can't argue this didn't work on me. I think the reason why I have some lingering discomfort is that Miki is right on the border of a slave who wants to be a slave, belongs to someone who doesn't quite treat it like one (but could), and (unlike Murderbot) is probably incapable of deciding to be something else. I'm sure this was intentional on Wells's part; a primary theme of this series is the nature of self-determination in a universe that treats you like property. It's also a long-standing SF theme that's fair game to explore. But it still bothered me the more I thought about it, and I'm not sure Miki's owner/friend, or this novella, fully engages with the implications.

That element bumped my enjoyment of this entry of the series a little lower, but this is still solidly entertaining stuff. Murderbot's internal critique of other people's security decisions is worth the price of entry by itself, and I'm still delighted by its narrative voice. I continue to recommend this whole series.

Followed by Exit Strategy.

Rating: 7 out of 10

Louis-Philippe Véronneau: Lowering the minimum volume on Android

Friday 20th of December 2019 03:00:38 AM

I like music a lot and I spend a large chunk of my time with either over-the-ear headphones or in-ear monitors on. Sadly, human ears are fragile things and you should always try to keep the volume as low as possible when using headphones.

When I'm not at home, I usually stream music from my home server to my Android device. The DAC on my Nexus 5 isn't incredible, but then my IEMs aren't stellar either.

Sadly, I've always been displeased with how little control Android actually gives you over the media volume, and I've always wished the lowest setting could be even lower.

I know a bunch of proprietary apps exist on the Google Play Store to help you achieve that kind of thing, but hey, I don't swing that way.

Android Audio Policies

After having a look at Android's audio policies documentation, it turns out that if you have a rooted device, you can define the audio volume curves yourself!

On a recent-ish version of Android, the two files you want to mess with are:

  • /vendor/etc/audio_policy_volumes.xml, which defines what type of audio stream (media, phone calls, earbuds, bluetooth, etc.) uses what type of audio curve (an example entry is sketched after this list).

  • /vendor/etc/default_volume_tables.xml, which defines the default audio curves referenced in the previous file.
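To make the relationship between the two files concrete, here is roughly what a media-stream entry in audio_policy_volumes.xml looks like; this mirrors AOSP's stock configuration, but your vendor's file may name things differently:

<volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_HEADSET"
        ref="DEFAULT_MEDIA_VOLUME_CURVE"/>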

If you've never modified files on Android, I highly recommend plugging your device into a computer, enabling USB debugging and connecting through adb. You will likely need to remount the filesystem, as it's mounted read-only by default:

$ adb shell
$ su
$ mount -o remount,rw /system

I don't really care about anything other than media volume, so here is the curve I ended up with. It goes very low and gives you more control at low volume, while still being quite loud at maximum volume. You will need to experiment with your device though, as DACs are all different.

<reference name="DEFAULT_MEDIA_VOLUME_CURVE">
    <!-- Default Media reference Volume Curve -->
    <point>1,-9000</point>
    <point>10,-8000</point>
    <point>20,-7000</point>
    <point>30,-6000</point>
    <point>40,-4000</point>
    <point>60,-3000</point>
    <point>100,-2000</point>
</reference>

For reference, the scale goes from -9600 to 0 (the values are in millibels, i.e. hundredths of a decibel), 0 being the loudest sound your device can produce.

As with all things Android, if you are not building your own images, this will get erased the next time you update your device. Don't forget to back up the files you modify, as audio curves are easy to screw up!

Shirish Agarwal: 100 million Indians, no hope and future

Thursday 19th of December 2019 11:37:50 PM

What will happen when 100 million Indians in the productive 14-40 age band are not working, not looking for work, not in training, and have no hope of ever getting a job? This is the India that most Indians are inheriting, as shared in a recent Govt. report released about a month back.

The report: Annual Report, PLFS 2017-18 (released 31/05/2019).

If you look at the report, it seems to be a humongous 600+ page document, but it has been bulked up by interspersing the Hindi translation within the report itself. Now, while I haven't gone fully through the report, the numbers themselves seem shocking. There are other numbers, such as women's participation in the labor force, which has plummeted, and that is also a huge cause for worry. It is estimated that in the 2010-2011 financial year the number of those disillusioned with the labor market or job creation was 10 million. Mind you, these are all Government figures. So what happened in the past 6-7 years to produce such shocking numbers, during which you see divisive steps such as the CAA (Citizenship Amendment Act) being taken, against which protests have taken place across the country? I believe there are at least 6-7 major issues which the present Government doesn't seem to have time to fix, and doesn't even seem to have any ideas or seriousness about how to fix.

Demonetization

The first one right there is demonetization, which the current Government fails to acknowledge as its mistake. While hindsight is always 20/20, not only did it fail to do any of the things promised to the citizens, it made sure that rural markets, startups and small businesses, where most exchange happens in cash, went out of business. Ironically, just 2 years after demonetization, the amount of money in cash was more than before, which means those who generated black money were still in good business. In fact, as I have shared before, Dr. Arun Kumar (who has done a lot of work on black money and the black economy, been part of 40-odd committees, and, as he has confessed, generated and used black money early in his career, especially in real estate) has written two books recently on the topic –

Understanding the Black Economy and Black Money in India – An Enquiry into Causes, Consequences and Remedies – Dr. Arun Kumar, 6 Feb 2017

Demonetization and the Black Economy – Dr. Arun Kumar, 20th December 2017 – this one gives a far larger picture of how demonetization failed to live up to its promises and what it didn't take into account.

The Big Reverse: How Demonetization Kicked India Out – Meera Sanyal, 10th November 2018

There is nothing I can add to what these economists have written, and in greater detail than anything I could write, so I would suggest you go through their books. I have given the amazon.in links, but you may use others. In fact, interestingly, just a couple of days ago on a business channel there was talk of commodity markets, and one of the big jewellers shared on national media that due to the Govt.'s imposition of high GST there has been gold smuggling, with people buying gold in black (i.e. without receipts). FWIW, this was on CNBC TV 18.

Goods and Services Tax

I covered this in the last blog post, so there is not much to reiterate here except to link back. Although I have to say GST is still hurting people, a lot. GST refunds are still an issue, as shared by the finance minister. Nowadays, though, any numbers, including the PLFS numbers, should be taken with a bucket of salt, as the present Government always attempts to present a rosy picture rather than the real one. So the numbers of people not looking for jobs, as well as of people whose refunds have not been given, are probably much higher than what has been shared. With the Statistical Commission not having enough members of high quality and in sufficient quantity, how good the numbers are is anybody's guess, as I have shared before. And in fact, whatever autonomy the Statistical Commission had is now also being eroded, as can be seen in a draft bill. This will only lower India's stature in global eyes.

Electoral Bonds

I don’t really have to say much in it except the reports by Nithin Sethi . While the Government has sought to placate the masses by its submissions in Rajya Sabha it hasn’t answered any of the questions raised by either Nithin Sethi or any of the findings by the RTI answers received by the Colonel. There is lot more to the story than still meets the eye, but probably this on some other day. This is apart from the fact that India is now funding lobbyists in U.S. so that Americans support India’s actions in Kashmir. This is after Americans found Indi actions baffling in Kashmir and Americans have lot of experience in enemy engagement.

Retrospective Tax

This is perhaps one of the things which I shared in the last blog post as well. This is what has scared most potential international investors away. In fact, Bloomberg published a nice, well-written article about the issue, with links to the take of Mr. Harish Salve, who has been an unapologetic critic of the move by the then Congress Govt., which the BJP Govt. had promised to fix and has since done nothing about.

Rural demand, high agricultural prices and middlemen

There has been no pickup in rural demand and there is no policy by the Govt. to tackle this. A couple of months back the FM gave a 1.45 lakh crore (about $20 billion) tax bonanza to corporate houses, which make up a measly 3-4% of the total economy and are already swimming in cash, while the other 96% of the economy which actually oils the Indian market, the small businesses and the farmers, are net losers in the current regime. Even essential commodities' prices have gone up in both retail and wholesale markets, with almost all of the profits accruing to the middlemen rather than the farmer or the agricultural laborer. We are on the path of becoming England, which imports all of its veggies. Last but not least, exports from India have been down for the fourth straight month.

Conclusion

Unless India fixes a lot of structural issues, e.g. adherence to legal contracts or fast resolution in case of disputes, I don't see India bouncing back anytime soon. Nobody from the other side even comments on why the economies of Bangladesh, Vietnam, China and even Cambodia are able to ramp up even if the argument is a 'global slowdown'. Some people have argued for a cyclical slowdown but have had no evidence to prove that other than conjecture.

Bastian Blank: Introducing dpkg source format for git repositories

Thursday 19th of December 2019 10:00:00 PM

There is a large disagreement inside Debian on what a git repository used for Debian packages should look like. You just have to read debian-devel to get too much of it.

Some people prefer that the content of the repository looks like a "modern" (3.0) source package the Debian archive accepts. This means it includes the upstream source plus stuff in debian/patches that needs to be applied first to have something usable. But by definition this concept is incompatible with a normal git workflow; e.g. you can't cherry-pick upstream patches, but need to convert them into patch files either by hand or with another tool. It also can't use upstream CI test definitions without adapting them and patching the source first.

Other people prefer to always have the complete patched source available. This allows the use of cherry-pick and all the other git concepts available. But due to the way our "modern" (3.0) source formats are defined, it is impossible to use those together. So everyone wanting to use this can only use ancient 1.0 source packages, which lack a lot of features like modern compression formats.

Some do stuff that is much weirder. Weird things like git-dpm, which is also incompatible with merges. But we can't save everyone.

I started working on bridging the gap between a proper git repository and a modern Debian source package by building a source package from some special target in debian/rules. But people complained, maybe rightfully, that not being able to use dpkg-source is a big downside and needs a lot of documentation. To get this into proper shape, I'd like to introduce a new dpkg source format.

New source format

The package dpkg-source-gitarchive defines a new source format 3.0 (gitarchive). This format doesn't represent a real source package format, but uses a plain git repository as input to create a 3.0 (quilt) source package that the Debian archive accepts.

This software currently has some hardcoded expectations about how things look:

  • The git tree must not contain debian/patches; instead, all Debian patches are applied as git commits.
  • The file debian/source/format exists in the latest commit, not only as an uncommitted file.
  • Original tarballs need to be managed with pristine-lfs and must be compressed using xz.
  • Tags for upstream sources called upstream/$version need to exist.

Implementing this as a dpkg source format allows for a better user experience. You can build a new source package both from the git repository and from an existing source package by using dpkg-source/dpkg-buildpackage; no special tools or special targets in debian/rules are needed to manage sources to be uploaded.
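Putting the requirements together, a build from such a repository would presumably look something like this (a sketch based on the description above, not tested against the actual package):

$ sudo apt install dpkg-source-gitarchive      # assuming the package is available
$ echo '3.0 (gitarchive)' > debian/source/format
$ git add debian/source/format && git commit -m 'Switch to the gitarchive source format'
$ git tag --list 'upstream/*'                  # upstream tarballs tracked via pristine-lfs
$ dpkg-buildpackage -S                         # emits a 3.0 (quilt) source package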

Open questions
  • Is it a good idea to implement a pretty specific set of requirements as dpkg source format and make it an interface?
  • Is enforcing repository-based handling of upstream tarballs (for now only with pristine-lfs) a good idea?
  • Should it also handle debian version tags?

*edit: package name was changed from dpkg-format-gitarchive to dpkg-source-gitarchive.

Junichi Uekawa: gitlstreefs test now passes.

Thursday 19th of December 2019 10:56:02 AM
gitlstreefs test now passes. I think the difference was that the cloud build environment triggered from GitHub does not include the .git directory in the source tree; gitlstreefs depended on the git trees for some tests, so they didn't pass.

Enrico Zini: Python profiling data

Wednesday 18th of December 2019 07:33:32 AM

Python comes with a built-in way of collecting profile information, documented at https://docs.python.org/3/library/profile.html.

In a nutshell, it boils down to:

python3 -m cProfile -o profile.out command.py arguments…

This post is an attempt to document what's in profile.out, since the python documentation does not cover it. I have tried to figure this out by looking at the sources of the pstats module, the sources of gprof2dot, and by trying to make sense of the decoded structure and navigate it.

Decoding Python profile data

The data collected by python's profile/cProfile is a data structure encoded using the marshal module1.

Loading the data is simple:

>>> import marshal
>>> with open("profile.out", "rb") as fd:
...     data = marshal.load(fd)

Structure of Python profile data

Decoded profile data is a dictionary indexed by tuples representing functions or scopes:

# File name, Line number, Function or scope name
Func = Tuple[str, int, str]

For example:

("./staticsite/page.py", 1, "<module>") ("./staticsite/features/markdown.py", 138, "load_dir") ('~', 0, "<method 'items' of 'collections.OrderedDict' objects>") ('/usr/lib/python3.7/pathlib.py', 1156, 'stat')

('~', 0, …) represents a built-in function.

The signature of the decoded dict seems to be:

Dict[
    # Function or scope
    Func,
    Tuple[
        # Aggregated statistics
        int, int, float, float,
        # Callers
        Dict[
            # Caller function or scope
            Func,
            # Call statistics collected for this caller → callee
            Tuple[int, int, float, float]
        ]
    ]
]

The four int, int, float, float statistics are normally unpacked as cc, nc, tt, ct, and stand for:

  • cc: number of primitive (non-recursive) calls. Sometimes listed as pcalls.
  • nc: total number of calls (including recursive), sometimes listed as ncalls.
  • tt: total time, sometimes listed as tottime: total time spent in the given function, and excluding time made in calls to sub-functions.
  • ct: cumulative time, sometimes listed as cumtime: cumulative time spent in this and all subfunctions (from invocation till exit). This figure is accurate even for recursive functions.

Phrases in italic are quotations from profile's documentation.
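As a quick check of the structure described above, here is a minimal sketch that loads profile.out and prints the five most expensive functions by cumulative time (the file name is whatever you passed to -o):

import marshal

with open("profile.out", "rb") as fd:
    data = marshal.load(fd)

# Sort by ct (cumulative time), the fourth element of each stats tuple
for (fname, lineno, scope), (cc, nc, tt, ct, callers) in sorted(
        data.items(), key=lambda kv: -kv[1][3])[:5]:
    print(f"{scope} ({fname}:{lineno}): {nc} calls ({cc} primitive), "
          f"tt={tt:.3f}s ct={ct:.3f}s, called from {len(callers)} places")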

Building a call graph

This is some example code to turn the collected statistics into a call graph structure:

# Imports and type aliases assumed from the sections above
import marshal
import pstats
import sys
from typing import Dict, List, Tuple

Func = Tuple[str, int, str]
Stats = Dict[Func, Tuple[int, int, float, float,
                         Dict[Func, Tuple[int, int, float, float]]]]


class Node:
    """
    Node in the caller → callee graph
    """
    def __init__(self, func: Func):
        self.fname, self.lineno, self.scope = func
        self.callees: List["Call"] = []
        self.callers: List["Call"] = []
        self.cc = 0
        self.nc = 0
        self.tt = 0.0
        self.ct = 0.0

    def __str__(self):
        # Builtin function
        if self.fname == "~" and self.lineno == 0:
            return f"[builtin]:{self.scope}"
        # Shorten file names from system libraries
        self.shortname = self.fname
        for path in sorted(sys.path, key=lambda x: -len(x)):
            if not path:
                continue
            if self.fname.startswith(path):
                return f"[sys]{self.fname[len(path):]}:{self.lineno}:{self.scope}"
        # File in the local project
        return f"{self.fname}:{self.lineno}:{self.scope}"


class Call:
    """
    Arc in the caller → callee graph
    """
    def __init__(self, caller: Node, callee: Node, cc: int, nc: int, tt: float, ct: float):
        self.caller = caller
        self.callee = callee
        self.cc = cc
        self.nc = nc
        self.tt = tt
        self.ct = ct


class Graph:
    """
    Graph of callers and callees.

    Each node in the graph represents a function, with its aggregated
    statistics. Each arc in the graph represents one collected
    caller → callee statistic.
    """
    def __init__(self, stats: Stats):
        # Index of all nodes in the graph
        self.nodes: Dict[Func, Node] = {}
        # Total execution time
        self.total_time = 0.0
        # Build the graph
        for callee, (cc, nc, tt, ct, callers) in stats.items():
            self.total_time += tt
            # Get the callee and fill its aggregated stats
            ncallee = self.node(callee)
            ncallee.cc = cc
            ncallee.nc = nc
            ncallee.tt = tt
            ncallee.ct = ct
            # Create caller → callee arcs
            for caller, (cc, nc, tt, ct) in callers.items():
                ncaller = self.node(caller)
                call = Call(ncaller, ncallee, cc, nc, tt, ct)
                ncallee.callers.append(call)
                ncaller.callees.append(call)

    def node(self, fun: Func) -> Node:
        """
        Lookup or create a node
        """
        res = self.nodes.get(fun)
        if res is None:
            res = Node(fun)
            self.nodes[fun] = res
        return res

    @classmethod
    def load(cls, pathname: str) -> "Graph":
        """
        Builds a Graph from profile statistics saved on a file
        """
        with open(pathname, "rb") as fd:
            return cls(marshal.load(fd))

    @classmethod
    def from_pstats(cls, stats: pstats.Stats) -> "Graph":
        """
        Builds a Graph from an existing pstats.Stats structure
        """
        return cls(stats.stats)

Other python profiling links
  • pyinstrument: call stack profiler for Python
  • austin: Python frame stack sampler for CPython
  • py-spy: Sampling profiler for Python programs
  • FlameGraph: stack trace visualizer
  1. marshal does not guarantee stability of the serialized format across python versions. This means that you can't, for example, use a python2 tool to examine profiling results from a python3 script, or vice versa. See for example Debian bug #946894

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2019

Tuesday 17th of December 2019 02:47:37 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, 248.50 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

November was a quieter month again, most notably ‘#922246: www/lts: if DLA-1234-1 and DLA-1234-2 exist, only that last one shows up in indexes‘ was fixed, so that finally all DLAs show up on www.debian.org as they should.

We currently have 58 LTS sponsors each month sponsoring 215h. This month we are pleased to welcome the University of Oxford, TouchWeb and Dinahosting among our sponsors. It’s particularly interesting to see hosting providers that are creating financial incentives to migrate to newer versions: customers that don’t upgrade have to pay an extra amount which is then partly given back to Debian LTS.

The security tracker currently lists 35 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Charles Plessy: ... meanwhile, in the BTS

Tuesday 17th of December 2019 02:32:24 PM

And while people still debate about legitimate aggression in our mailing lists, we still get offensive bug reports via the BTS...

Dirk Eddelbuettel: BH 1.72.0-2 on CRAN

Tuesday 17th of December 2019 12:22:00 PM

Yesterday’s release of BH 1.72.0-1 was so much fun we decided to do it again :)

More seriously, and as mentioned, we have to do some minor adjustments as required by CRAN. One is to ensure all filenames fit with their full paths into a shorter limit imposed by an ancient tar standard. So I always rename inst/include/boost/numeric/odeint/stepper/generation/karp54_classic.hpp by shortening it to .../karp54_cl.hpp and adjust the one file that includes this internal file. Not a big deal, and done for years.

But this time, and inadvertently, I also renamed a similarly-named file one directory higher. And it gets included by some other files, which then fail and bark. My thanks to Alexey Shiklomanov for noticing this, letting me know, and testing a fixed package. I now wish his ODE-solving package was already on CRAN so that I’d known sooner ;-) as seemingly of the current 192 reverse dependencies, none are doing ODE maths.

No other changes, and sorry for the double download of both 1.72.0-1 and 1.72.0-2 (if you were fast enough to catch the -1 file).

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kai-Chung Yan: What If XcodeGhost Hits Android...

Tuesday 17th of December 2019 02:50:17 AM

XcodeGhost broke out some time ago, but I didn’t know about it, since I don’t use any Apple products, until an email from one of my Debian colleagues drew my attention to it.

Let me briefly describe the XcodeGhost outbreak. Because of the Chinese Communist Party’s Great Firewall, iOS developers in China cannot easily, or at all, download Xcode from official locations. They therefore find other ways, like downloading it from messed-up places like the Baidu cloud drive, which unfortunately contained the XcodeGhost malware and could build infected apps. As a result, most affected apps are from China.

I can’t stop thinking of such security event will happen to Android. Everything of Google is banned in China nowadays, Android developers in China also obviously can’t download Android SDK from official locations, and I don’t believe all of them are willing to pay for a stable VPN, not to mention the government claims to disable all VPNs. So it’s imaginable that some day there will be a similar virus outbreak caused by affected Android SDK.

I think I will have to share the responsibility if that day comes. I’m a member of the Debian Android Tools team, working on getting the Android SDK into Debian. Although we are currently still at an initial stage, once we finish our work, Android developers using Debian or Ubuntu will be able to safely and conveniently get the official Android SDK from official software repositories which are not censored in China. However, since I usually lack spare time, we are still stuck in discussion with the f2fs-tools developers, and thus can’t even finish adb and fastboot yet. If an XcodeGhost-like attack happens to Android, it will be my laziness that prevented us from stopping it.

We’ve submitted patches to f2fs-tools team, and one of our core members discussed in the email mentioned at the beginning about letting our nonfree packages download Android SDK files from China’s mirror if it can’t access Google. I hope we can finish our job as soon as possible.

Kai-Chung Yan: How to Contribute to Android SDK in Debian

Tuesday 17th of December 2019 02:50:17 AM

A bit of background: this article was written as a beginner’s guide for students applying for the “Android SDK Tools in Debian” project in Outreachy.

The Android SDK is available in Debian. With these tools, Android developers are able to build their apps with pure open source software that has neither limitations on usage nor potential scary backdoors. However, the packages are usually out-of-date and do not cover all components of the SDK, due to the huge size of AOSP and a lack of manpower. If you are interested in helping us, you may find this article helpful.

This isn’t a detailed tutorial. Instead, it’s a guide for you to know about the overall workflow and where to start.

Where to Find Us

You can find some information about the team here. IRC and the mailing list are how we mostly communicate.

IRC

IRC is a kind of anonymous group chat technology invented a great many years ago. It is, perhaps surprisingly, the preferred way of meeting for most of the open source projects in the world. In order to join an IRC channel, use any IRC client (e.g. Polaris or IRCCloud) to log in to an IRC server, choose a nickname, then enter a channel you find interesting.

Debian has an official IRC server (which is actually OFTC): irc.debian.org. You can find us in the #debian-mobile and #debian-android-tools channels. We usually talk in the first one and check out Git repository notifications in the second one. My nickname is seamlik, and _hc is the team founder.
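For example, with a terminal client such as irssi (any client works; the nickname below is just a placeholder):

/connect irc.debian.org
/nick yournick
/join #debian-mobile
/join #debian-android-tools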

Mailing list

In case you don’t already know what an mailing list is, imagine it’s also a group chat platform but for emails. Any email address that subscribe to the list address will receive all emails sent to the list address. Our team also has a mailing list where we make announcements and discussions that we prefer to be publicly recorded. Visit the mailing list webpage to subscribe to it.

Git Repositories

We host all our code on Debian’s infrastructure (Alioth, though it is being replaced at the time of writing). There are also mirrors on GitHub, though we do not accept pull requests there because that would be outside Debian’s workflow.

Learn to Maintain Debian Packages

The android-tools team (our team

Kai-Chung Yan: Google Summer of Code Started: Packaging Android SDK for Debian

Tuesday 17th of December 2019 02:50:17 AM

And here it is: I have been accepted as a GSoC 2015 student! Actually it has been a while since the results came out at the end of April. When I was applying for this GSoC, I never expected I could be accepted.

So what is Google Summer of Code, in case someone hasn’t heard about it at all? Google Summer of Code is an annual activity hosted by Google which gathers college students around the world to contribute to open source software. Every year hundreds of open source organizations join GSoC to provide project ideas and mentors, and thousands of students apply, choose a project and work on it during the summer, getting well paid by Google if they manage to finish the task. This year we have 1051 students accepted, with 49 from China and 2 from Taiwan. You can read more details in this post.

Although my geography textbooks and my geography teacher said so, I had never quite believed that India is a software giant, until I saw that India has the most students accepted and that my partner on this project is a girl from India!

Project Details

The project we will work on this summer is to package the Android SDK for Debian. In addition, we will also update the existing packages that are essential to Android development, e.g. Gradle. Although some may say this project is not that complicated, it still has lots of work to do, which makes it a rather large project with two students working on it and a co-mentor. My primary mentor is Hans-Christoph Steiner from The Guardian Project, and he also wrote a post about the project.

Why do we need to do this? There are reasons of security, convenience and ideals, but the biggest one for me is that if you use Linux and you write Android apps, or perhaps you are just about to flash CyanogenMod onto your device, there will be no better way than to just type sudo aptitude install adb. More information on this project can be found on Debian’s Android Tools Team page.

Problems We Are Facing

Currently (mid May) the official beginning of the coding phase has not yet arrived, but we have held a meeting on IRC and confirmed the largest problems we have so far.

The first problem is the packaging of Gradle. Gradle is a rather new and innovative build automation system, with which most Android apps and the Android SDK tools written in Java are built. It is a build system, so unsurprisingly it is built with itself, which makes updating it much harder. Currently Gradle is at version 2.4 but the one in Debian is 1.5. In the worst case, we will have to build all versions of Gradle from 1.6 to 2.4, one by one, due to its self-dependency.

In reality, building a project with Gradle is far easier and more pleasant than with any other build system, because it handles dependencies in a brilliant way by downloading everything it needs, including Gradle itself. Thus it does not matter whether you have installed Gradle, or even whether you are using Linux or Windows. However, when building the Debian package, we have to abandon that convenience and make the build totally offline, relying only on what is in Debian. This is for security and reproducibility, but the packaging becomes much more complicated, since we have to modify lots of code in the upstream build scripts. Also, since the build is restricted to what already exists in a Debian system, quite a few plugins that use software not yet in Debian will be excluded from the Debian version of Gradle, making it less usable than simply launching the Gradle wrapper. Given that, I suppose very few people will really use the Gradle in the Debian repository.

The second problem is how to determine which Git commit we should check out from the Android SDK repository to build a particular version of the tools. The Android SDK does not release its source code in tarball form, so we have to deal with the Git repository. What’s worse, the tools in the Android SDK come from different repositories, and they carry almost no information about the tools’ version numbers at all. We can’t confirm which commit, tag or branch in the repository corresponds to a particular version. And what’s way worse, the Android SDK has 3 parts, SDK-tools, Build-tools and Platform-tools, each of which has different version numbers! And what’s way way worse, I have posted the question to various places and no one has answered me.

After our IRC discussion, we have been focusing on Gradle. I am still reading documentation about Debian packaging and using Gradle. All I hope now is that we can finish the project nicely and fast, and that no regrets will be left this summer. Also I hope my GSoC T-shirt will be delivered to my home as soon as possible; it’s really cool!

Do You Want to Join GSoC as Well?

Surprisingly, most students in my school haven’t heard about Google Summer of Code at all, which is why there are only 2 accepted students from Taiwan. But if you know about it and you study computer science (or some other ridiculous department related to computer science, just like mine), do not hesitate to join next year’s! Contributing to open source and getting highly paid (5500 USD this year), is that not really cool? Here I am offering you several tips.

Before I submitted my proposal, I saw a guy from KDE had written some tips with a shocking title. Reading that is enough, I guess, but I still need to list some points:

  • Contact your potential mentors even before you write your proposal; that really helps.
  • Remember to include a rough schedule in your proposal; it is very important.
  • Be interactive with your mentor, and ask good questions often.

Have fun in the summer!

Kai-Chung Yan: Introducing Gradle 1.12 in Debian

Tuesday 17th of December 2019 02:50:17 AM

After 5 weeks of work, my colleague Komal Sukhani and I succeeded in bringing Gradle 1.12 with other packages into Debian. Here is a brief note of what we’ve done:

Note that both Gradle and Groovy are in the experimental distribution, because Groovy build-depends on Gradle, and Gradle build-depends on bnd 2.1.0, which is in experimental as well.

Updating these packages took us an entire month because my summer vacation did not start until the day we uploaded Gradle and Groovy, which means we were doing the job in our spare time (Sukhani had finished her semester at the beginning, though).

The next step is to update Gradle to 2.4 as soon as possible, because Sukhani has started her work on the Java part of the Android SDK, which requires Gradle 2.2 or above. Before updating Gradle I need to package the Java SDK for AWS, which enables Gradle to access S3 resources. I also need to make gradle-1.12 a separate package and use it to build gradle_2.4-1.

After that, I will start my work on the C/C++ part of Android SDK, which is far more complicated and messy than I had expected. Yet I enjoy the summer coding. Happy coding, all open source developers!

Finally, feel free to check out my weekly report in Debian’s mailing list:

Kai-Chung Yan: Attending FOSDEM 2016: First Touch in Open Source Community

Tuesday 17th of December 2019 02:50:17 AM

FOSDEM 2016 happened at the end of January, but I have been too busy to write about my first trip to an open source event.

FOSDEM takes place in Belgium, which is almost ten thousand kilometers from my home. Luckily, Google kindly offered sponsorship for travel to Belgium and lodging for former GSoC mentors and students in Debian, which made my trip possible without giving my dad headaches. Thank you Google!

Open source meetings are really fun. Imagine you have been working hard on an exciting project with several colleagues around the world who have never met you, and now you have a chance to meet them and make friends with them; cool! However, I am not involved too deeply with any project, so I didn’t have too many expectations. But I was still excited when I first saw my mentor Hans-Christoph Steiner! A pity that we forgot to take a picture, as I’m not the kind of person who takes selfies every day.

One of the most interesting projects I saw during FOSDEM is Ring. Ring is a distributed communication system without central servers. All Ring clients in the world are connected to several others and find a particular user using a distributed hash table. A Ring client is a key pair, whose public key serves as the ID. Thus, Ring resists censorship and eavesdropping, which is great for Chinese citizens and feared by the Chinese government. After I got home I learned of another similar but older project, Tox, which seems to be more feature-rich than Ring but is still not promoted enough. There is a huge disadvantage to both projects, which is high battery drain on Android. I hope someday they will improve it.

At the end of FOSDEM I joined the volunteers in the cleanup. We cleaned all the buildings, restored the rooms and finally shared dinner in the hall of the K Building. I’m not a European, so I didn’t talk too much with them, but this was really an unforgettable experience. I hope I can join the next FOSDEM soon.

Dima Kogan: C++ probes with perf

Monday 16th of December 2019 09:19:00 PM

The Linux perf tool can be used to (among many other things!) instrument user-space code. Dynamic probes can be placed in arbitrary locations, but in my usage, I almost always place them at function entry and exit points. Since perf comes from the Linux kernel, it supports C well. But sometimes I need to deal with C++, and perf's incomplete support is annoying. Today I figured out how to make it sorta work, so I'm writing it up here.

For the record, I'm using perf 4.19.37 from linux-base=4.6 on Debian, on amd64.

Let's say I have this not-very-interesting C++ program in tst.cc:

#include <stdio.h>

namespace N {
    void f(int x) {
        printf("%d\n", x);
    }
}

int main(int argc, char* argv[]) {
    N::f(argc);
    return 0;
}

It just calls the C++ function N::f(). I build this:

$ g++ -o tst tst.cc

And I ask perf about what functions are instrument-able:

$ perf probe -x tst --funcs
N::f
completed.7326
data_start
deregister_tm_clones
frame_dummy
main
printf@plt
register_tm_clones

Here perf says it can see N::f (it demangled the name, even), but if I try to add a probe there, it barfs:

$ sudo perf probe -x tst --add N::f
Semantic error :There is non-digit char in line number.

The reason is that perf's probe syntax uses the : character for line numbers, and this conflicts with the C++ scope syntax. perf could infer that a :: is not a line number, but nobody has written that yet. I generally avoid C++, so I'm going to stop at this post.

So how do we add a probe? Since perf can't handle the demangled names, we can ask it to skip the demangling:

$ perf probe -x tst --funcs --no-demangle
completed.7326
data_start
deregister_tm_clones
frame_dummy
main
printf@plt
register_tm_clones

But now my function is gone! Apparently there's a function filter that by default throws out all functions that start with _. Let's disable that filter:

$ perf probe -x tst --funcs --no-demangle --filter '*'
_DYNAMIC
_GLOBAL_OFFSET_TABLE_
_IO_stdin_used
_ZN1N1fEi
__FRAME_END__
__TMC_END__
__data_start
__do_global_dtors_aux
__do_global_dtors_aux_fini_array_entry
__dso_handle
__frame_dummy_init_array_entry
__libc_csu_fini
__libc_csu_init
_edata
_fini
_init
_start
completed.7326
data_start
deregister_tm_clones
frame_dummy
main
printf@plt
register_tm_clones

Aha. There's my function: _ZN1N1fEi. Can I add a probe there?

# perf probe -x tst --add _ZN1N1fEi
Failed to find symbol _ZN1N1fEi in /tmp/tst
  Error: Failed to add events.

Nope. Apparently I need to explicitly tell it that I'm dealing with non-demangled symbols:

# perf probe -x tst --add _ZN1N1fEi --no-demangle
Added new event:
  probe_tst:_ZN1N1fEi  (on _ZN1N1fEi in /tmp/tst)

You can now use it in all perf tools, such as:

    perf record -e probe_tst:_ZN1N1fEi -aR sleep 1

There it goes! Now let's add another probe, at the function exit:

# sudo perf probe -x tst --add _ZN1N1fEi_ret=_ZN1N1fEi%return --no-demangle
Added new event:
  probe_tst:_ZN1N1fEi_ret__return  (on _ZN1N1fEi%return in /tmp/tst)

You can now use it in all perf tools, such as:

    perf record -e probe_tst:_ZN1N1fEi_ret__return -aR sleep 1

And now I should be able to run the instrumented program, and to see all the crossings of my probes:

# perf record -eprobe_tst:_ZN1N1fEi{,_ret__return} ./tst
1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.017 MB perf.data (2 samples) ]
# perf script
tst  6216 [001] 834490.086097:            probe_tst:_ZN1N1fEi: (559833acb135)
tst  6216 [001] 834490.086196: probe_tst:_ZN1N1fEi_ret__return: (559833acb135 <- 559833acb172)

Sweet! Again, note that this is perf version 4.19.37, and other versions may behave differently.
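One housekeeping note not covered above: added probes persist until removed, so when you are done you can delete them by event name:

# perf probe --del 'probe_tst:*'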

Simon Josefsson: Passive Icinga Checks: icinga-pusher

Monday 16th of December 2019 07:19:56 PM

I use Icinga to monitor the availability of my Debian/OpenWRT/etc machines. I have relied on server-side checks on the Icinga system that monitor the externally visible operations of the services that I care about. In theory, monitoring externally visible properties should be good enough. Recently I had one strange incident that was due to running out of disk space on one system. This prompted me to revisit my thinking, and to start monitoring internal factors as well. This would allow me to detect problems before they happen, such as an out-of-disk-space condition.

Another reason that I only had server-side checks was that I didn’t like the complexity of the Icinga agent, nor did I want to open up my other servers to incoming SSH connections from the Icinga server. Complexity and machine-based authorization tend to lead to security problems, so I prefer to avoid them. The manual mentions agents that use the REST API, which was the start of my journey into something better.

What I would prefer is for the hosts to push their self-test results to the central Icinga server. Fortunately, there is an Icinga REST API in modern versions of Icinga (including version 2.10, which I use). The process-check-result API can be used to submit passive check results. Getting this up and running required a bit more research and creativity than I would have hoped, so I thought it was material enough for a blog post. My main requirement was to keep complexity down, hence I ended up with a simple shell script that is run from cron. None of the existing API clients mentioned in the manual appealed to me.

Prepare the Icinga server with some configuration changes to support the push clients (replace blahonga with a fresh long random password).

icinga# cat > /etc/icinga2/conf.d/api-users.conf
object ApiUser "pusher" {
  password = "blahonga"
  permissions = [ "actions/process-check-result" ]
}
^D
icinga# icinga2 feature enable api && systemctl reload icinga2

Then add some Service definitions and assign it to some hosts, to /etc/icinga2/conf.d/services.conf:

apply Service "passive-disk" { import "generic-service" check_command = "passive" check_interval = 2h assign where host.vars.os == "Debian" } apply Service "passive-apt" { import "generic-service" check_command = "passive" check_interval = 2h assign where host.vars.os == "Debian" }

I’m using a relaxed check interval of 2 hours because I will submit results from a cron job that is run every hour. The next step is to setup the machines to submit the results. Create a /etc/cron.d/icinga-pusher with the content below. Note that % characters needs to be escaped in crontab files. I’m running this as the munin user which is a non-privileged account that exists on all of my machines, but you may want to modify this. The check_disk command comes from the monitoring-plugins-basic Debian package, which includes other useful plugins like check_apt that I recommend.

30 * * * * munin /usr/local/bin/icinga-pusher `hostname -f` passive-apt /usr/lib/nagios/plugins/check_apt
40 * * * * munin /usr/local/bin/icinga-pusher `hostname -f` passive-disk "/usr/lib/nagios/plugins/check_disk -w 20\% -c 5\% -X tmpfs -X devtmpfs"

My icinga-pusher script requires a configuration file with some information about the Icinga setup. Put the following content in /etc/default/icinga-pusher (again replacing blahonga with your password):

ICINGA_PUSHER_CREDS="-u pusher:blahonga"
ICINGA_PUSHER_URL="https://icinga.yoursite.com:5665"
ICINGA_PUSHER_CA="-k"

The parameters above are used by the icinga-pusher script. ICINGA_PUSHER_CREDS contains the API user credentials, either a simple "-u user:password" combination or something like "--cert /etc/ssl/yourclient.crt --key /etc/ssl/yourclient.key". ICINGA_PUSHER_URL is the base URL of your Icinga setup, for the API port, which is usually 5665. ICINGA_PUSHER_CA is "--cacert /etc/ssl/icingaca.crt", or "-k" to skip CA verification (not recommended!).

Below is the script icinga-pusher itself. Some error handling has been removed for brevity — I have put the script in a separate “icinga-pusher” git repository which will be where I make any updates to this project in the future.

#!/bin/sh
# Copyright (C) 2019 Simon Josefsson.
# Released under the GPLv3+ license.

. /etc/default/icinga-pusher

HOST="$1"
SERVICE="$2"
CMD="$3"

OUT=$($CMD)
RC=$?

oIFS="$IFS"
IFS='|'
set -- $OUT
IFS="$oIFS"

OUTPUT="$1"
PERFORMANCE="$2"

data='{ "type": "Service", "filter": "host.name==\"'$HOST'\" && service.name==\"'$SERVICE'\"", "exit_status": '$RC', "plugin_output": "'$OUTPUT'", "performance_data": "'$PERFORMANCE'" }'

curl $ICINGA_PUSHER_CA $ICINGA_PUSHER_CREDS \
  -s -H 'Accept: application/json' -X POST \
  "$ICINGA_PUSHER_URL/v1/actions/process-check-result" \
  -d "$data"

exit 0
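Once the script is installed, it can be exercised by hand before wiring up cron (the host name below is just an example; note that % only needs escaping inside crontab files, not in the shell):

$ /usr/local/bin/icinga-pusher myhost.example.com passive-disk \
    "/usr/lib/nagios/plugins/check_disk -w 20% -c 5% -X tmpfs -X devtmpfs"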

What do you think? Is there a simpler way of achieving what I want? Thanks for reading.

Thomas Lange: FAI.me service now supports kernel cmdline options

Monday 16th of December 2019 04:34:36 PM

The FAI.me service for creating customized installation and cloud images now supports additional kernel cmdline parameters. After toggling to the advanced settings, you can add your options. These will replace the default grub "quiet" option.

This feature is currently only available for the installation images, but not for the cloud images.

The URL of the FAI.me service is

https://fai-project.org/FAIme/


Dirk Eddelbuettel: BH 1.72.0-1 on CRAN

Monday 16th of December 2019 01:03:00 PM

The BH package provides a sizeable portion of the Boost C++ libraries as a set of template headers for use by R. It is quite popular, and frequently used together with Rcpp. The BH CRAN page shows e.g. that it is used by rstan, dplyr as well as a few other packages. The current count of reverse dependencies is at 193.

Boost releases every four months. The last release we packaged was 1.69 from last December, prepared just before CRAN’s winter break. As it needed corresponding changes in three packages using it, it arrived on CRAN early January of this year. The process was much smoother this time. Yesterday I updated the package to the Boost 1.72 release made last Wednesday, and we are on CRAN now as there are no apparent issues. Of course, this BH release was also preceded by a complete reverse-depends check on my end, as well as on CRAN.

As you may know, CRAN tightened policies some more. Pragmas suppressing compiler warnings are verboten so I had to disable a few (see this patch file). Expect compilations of packages using Boost, and BH, to be potentially very noisy. Consider adding flags to your local ~/.R/Makeconf and we should add them to the src/Makevars as much as we can. Collecting a few of these on a BH wiki page may not be a bad idea. Contributions welcome!
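For example, something along these lines in ~/.R/Makeconf could quieten things down (a hypothetical snippet; adjust the list to the warnings your compiler actually emits):

## silence warnings commonly triggered by Boost headers
CXXFLAGS += -Wno-unused-variable -Wno-deprecated-declarations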

One change we now made is to actually start fresh, rather than from the previous release. That way we reflect upstream removals better than before. So even though the upstream source release grew, our release tarball is a little smaller than before. Yay. That is likely a one-off, though, and the file is still Yuge.

As far as regularly scheduled changes go, we responded to three issue tickets and added two more (small) libraries, and also attempted to clean up one (which does not fully disappear due to interdependencies).

A detailed list of our local changes from the NEWS file follows. Two diffs to upstream Boost (for diagnostics, plus another small one for path length and other minor issues) are in the repo as well.

Changes in version 1.72.0-1 (2019-12-15)
  • Upgraded to Boost 1.72.0 (plus the few local tweaks) (#65)

  • Applied the standard minimal patch with required changes, as well as the newer changeset for diagnostics pragma suppression.

  • No longer install filesystem _explicitly_ though some files are carried in (#55)

  • Added mp11 (as requested in #62)

  • Added polygon (as requested in #63)

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Enrico Zini: Environment links

Sunday 15th of December 2019 11:00:00 PM
  • Earth Temperature Timeline (climate, chart; archive.org; 2019-12-16)
  • Methane (or ‘Natural Gas’) – the Other Major Greenhouse Gas — Information is Beautiful (climate, chart; archive.org; 2019-12-16): Where does methane or 'natural gas' come from? Is it a greenhouse gas? How much methane do cows produce? All these questions answered, visually.
  • Kessler syndrome - Wikipedia (science; archive.org; 2019-12-16): The Kessler syndrome (also called the Kessler effect,[1][2] collisional cascading, or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low Earth orbit (LEO) is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.[3] One implication is that the distribution of debris in orbit could render space activities and the use of satellites in specific orbital ranges difficult for many generations.[3]
  • Stuff in Space (science; archive.org; 2019-12-16): Stuff in Space is a realtime 3D map of objects in Earth orbit, visualized using WebGL.

Giovanni Mascellani: Debian init systems GR

Sunday 15th of December 2019 02:30:00 PM

This is my vote:

-=-=-=-=-=- Don't Delete Anything Between These Lines =-=-=-=-=-=-=-=-
7b77e0f2-4ff9-4adb-85e4-af249191f27a
[ 1 ] Choice 5: H: Support portability, without blocking progress
[ 2 ] Choice 4: D: Support non-systemd systems, without blocking progress
[ 3 ] Choice 7: G: Support portability and multiple implementations
[ 4 ] Choice 3: A: Support for multiple init systems is Important
[ 5 ] Choice 2: B: Systemd but we support exploring alternatives
[ 6 ] Choice 1: F: Focus on systemd
[ 7 ] Choice 8: Further Discussion
[ 8 ] Choice 6: E: Support for multiple init systems is Required
-=-=-=-=-=- Don't Delete Anything Between These Lines =-=-=-=-=-=-=-=-

I don't think that nowadays the choice of the init system can be neutral: like the choice of a kernel, a libc and some other core components, you cannot just pretend that everything can be transparently swapped with anything else, therefore Debian rightly has to choose a default thing (Linux, glibc, the GNU userland, and now systemd) which is the standard proposal to the casual user. Systemd has clearly become the mainstream thing for a lot of good reasons, and it is the choice that most Debian users and developers should adopt (this is not to say that systemd, its development mode or its goals are perfect; but we have to choose between alternatives that exist, not between ideal ones). This is the reason why imposing that any init system should be supported at the same level is silly to me, and therefore why option E is the last one, and the only one below Further Discussion; in much the same way it would be silly to impose that kFreeBSD, Mach and musl be supported at the same level as their default counterparts (although I would be pretty excited to see that happening!). We cannot expect every Debian contributor to have the resources to contribute scripts/units for any init system that randomly appears.

At the same time, I like Debian to be Universal in the sense of potentially being home for any reasonable piece of software. Diversity, user choice and portability are still values, although not absolute ones, and nobody in Debian should make it impossible to pursue them. I also share the "glue" vision proposed by the principles of choices G and H. Therefore work extending support for non-default init systems (and, again, kernels, libc's, architectures, any core component) should be loyally accepted by all involved contributors, even if they do not see the need for such components.

For these reasons I consider option H the one that, despite being possibly a bit too verbose, spells out my thoughts. All the other options are essentially ordered by how close I perceive them to be to option H.

I'd like to thank Ian Jackson for having proposed option H and for having collected a few important criteria to find a way through the jungle of all the available options.
