Planet KDE

Hello new Konsole

Wednesday 5th of June 2019 12:00:00 AM

Konsole has been in good shape for many, many years, and went almost 10 years without anything really exciting being added, mostly because the software is mature: why modify something that works just to add experimental features?

But then reality kicked in: Konsole lacked some features that are quite nice and exist in Terminator, Tilix and other newer terminals. But Tilix now lacks a developer, and Terminator is also not being actively developed (its last release was on 26 February 2017).

Konsole is thought of as a powerhouse among terminal emulators, with many features that are quite specific to certain use cases, and it’s really tricky to remove a feature (even when it’s half-broken) because some people rely on that behavior. So my first idea is to modernize the codebase so I can actually start to code.

I have been sending patches to Konsole regularly for around a year now, and I don’t plan to stop.

Sirgienko Nikita (sirgienko)

Tuesday 4th of June 2019 10:52:35 PM
Hello everyone! I'm participating in Google Summer of Code 2019, working on the KDE Cantor project. The GSoC project is mentored by Alexander Semke, one of the core developers of LabPlot, Knights and Cantor. First, let me introduce you to Cantor and to my GSoC project:
Cantor is a KDE application providing a graphical interface to different open-source computer algebra systems and programming languages, like Octave, Maxima, Julia, Python etc. The main idea of this application is to provide one single, common and user-friendly interface for different systems instead of providing different GUIs for different systems. The details specific to the different languages are transparent to the end-user and are handled internally in the language specific parts of Cantor's code.
There is another project following this idea: Jupyter. As a result of its huge popularity, user base and community, there is a lot of content created and contributed by users from different scientific and educational areas, as documented in the gallery of interesting Jupyter Notebooks.
At the moment, Cantor has its own format for projects. Though this format is good enough to manage Cantor projects, there is not a lot of content created and published by Cantor users, and the user base is still not at the level this application would deserve. Furthermore, sharing content stored in Cantor's native format requires the availability of Cantor on the target system, and Cantor is available for Linux only at the moment. All this complicates attempts to make Cantor more popular and known to a broader user base. Adding the possibility to import/export Jupyter Notebook worksheets in Cantor will address the problems described above. If you are interested in a more technical and detailed description of the project, you can check out my proposal.
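For context, a Jupyter notebook is a plain JSON document, which is what an importer has to map onto Cantor's worksheet entries. This is a hand-written sketch based on the public nbformat version 4 specification, not one of the selected notebooks; a minimal file looks roughly like this:

```json
{
  "nbformat": 4,
  "nbformat_minor": 2,
  "metadata": {
    "kernelspec": { "name": "python3", "display_name": "Python 3" }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": ["# A heading\n", "Some explanatory text.\n"]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "source": ["1 + 1"],
      "outputs": [
        {
          "output_type": "execute_result",
          "execution_count": 1,
          "data": { "text/plain": ["2"] },
          "metadata": {}
        }
      ]
    }
  ]
}
```

Markdown and code cells map fairly naturally onto Cantor's text and command entries; the interesting work is in handling the various output types.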

Actually, this is not my first contribution to Cantor. I have been contributing to this project for roughly a year already. As a developer interested in C++, Qt and applications relevant for scientific purposes, I started to contribute to Cantor last year by working on smaller bug fixes first. With time and with more understanding of the overall architecture of Cantor, I could work on bigger topics like new features, more complicated bug fixes and refactorings in the code, and this year I'm happy to contribute yet another big and very important piece of functionality to Cantor as part of GSoC.

To start, I selected a couple of well-structured Jupyter notebooks from the gallery of interesting Jupyter Notebooks. The notebooks were selected based on three criteria:

  • they should be self-sufficient
  • they should contain commands and results of different types
  • they should have a reasonable size sufficient for testing the new code and for demoing the results
Below you can see the screenshots of the notebooks I decided to use:

The notebooks will be used for testing functionality and also for showing the progress of this project, and in the final post I will summarize and report on Cantor being able to successfully process such files.

In the next post I plan to show a first working version of the Jupyter importer.

Building and testing on multiple platforms – introducing minicoin

Tuesday 4th of June 2019 01:30:16 PM

When working on Qt, we need to write code that builds and runs on multiple platforms, with various compiler versions and platform SDKs, all the time. Building code, running tests, reproducing reported bugs, or testing packages is at best cumbersome and time consuming without easy access to the various machines locally. Keeping actual hardware around is an option that doesn’t scale particularly well. Maintaining a bunch of virtual machines is often a better option – but we still need to set those machines up, and find an efficient way to build and run our local code on them.

Building my local Qt 5 clone on different platforms to see if my latest local changes work (or at least compile) should be as simple as running “make”, perhaps with a few more options needed. Something like

qt5 $ minicoin run windows10 macos1014 ubuntu1804 build-qt

should bring up three machines, configure them using the same steps that we ask Qt developers to follow when they set up their local machines (or that we use in our CI system Coin – hence the name), and then run the build job for the code in the local directory.

This (and a few other things) is possible now with minicoin. We can define virtual machines in code that we can share with each other like any other piece of source code. Setting up a well-defined virtual machine within which we can build our code takes just a few minutes.

minicoin is a set of scripts and conventions on top of Vagrant, with the goal to make building and testing cross-platform code easy. It is now available under the MIT license at

A small detour through engineering of large-scale and distributed systems

While working with large-scale (thousands of hosts), globally distributed systems, one of my favourite, albeit somewhat gruesome, metaphors was that of “servers as cattle” vs “servers as pets”. Pet-servers are those we groom manually, we keep them alive, and we give them nice names by which to remember and call (i.e. ssh into) them. However, once you are dealing with hundreds of machines, manually managing their configuration is no longer an option. And once you have thousands of machines, something will break all the time, and you need to be able to provision new machines quickly and automatically, without having to manually follow a list of complicated instructions.

When working with such systems, we use configuration management systems such as CFEngine, Chef, Puppet, or Ansible, to automate the provisioning and configuration of machines. When working in the cloud, the entire machine definition becomes “infrastructure as code”. With these tools, servers become cattle which – so the rather unvegetarian idea – is simply “taken behind the barn and shot” when it doesn’t behave like it should. We can simply bring a new machine, or an entire environment, up by running the code that defines it. We can use the same code to bring production, development, and testing environments up, and we can look at the code to see exactly what the differences between those environments are. The tooling in this space is fairly complex, but even so there is little focus on developers writing native code targeting multiple platforms.

For us as developers, the machine we write our code on is most likely a pet. Our primary workstation dying is the stuff of nightmares, and setting up a new machine will probably keep us busy for many days. But this amount of love and care is perhaps not required for those machines that we only need for checking whether our code builds and runs correctly. We don’t need our test machines to be around for a long time, and we want to know exactly how they are set up so that we can compare things. Applying the concepts from cloud computing and systems engineering to this problem led me (back) to Vagrant, which is a popular tool to manage virtual machines locally and to share development environments.

Vagrant basics

Vagrant gives us all the mechanisms to define and manage virtual machines. It knows how to talk to a local hypervisor (such as VirtualBox or VMware) to manage the life-cycle of a machine, and how to apply machine-specific configurations. Vagrant is written in Ruby, and the way to define a virtual machine is to write a Vagrantfile, using Ruby code in a pseudo-declarative way:

    Vagrant.configure("2") do |config|
        config.vm.box = "generic/ubuntu1804"
        config.vm.provision "shell",
            inline: "echo Hello, World!"
    end

Running “vagrant up” in a directory with that Vagrantfile will launch a new machine based on Ubuntu 18.04 (downloading the machine image from the vagrantcloud first), and then run “echo Hello, World!” within that machine. Once the machine is up, you can ssh into it and mess it up; when done, just kill it with “vagrant destroy”, leaving no traces.

For provisioning, Vagrant can run scripts on the guest, execute configuration management tools to apply policies and run playbooks, upload files, build and run docker containers, etc. Other configurations, such as network, file sharing, or machine parameters such as RAM, can be defined as well, in a more or less hypervisor-independent format. A single Vagrantfile can define multiple machines, and each machine can be based on a different OS image.

However, Vagrant works on a fairly low level, and each platform requires different provisioning steps, which makes it cumbersome and repetitive to do essentially the same thing in several different ways. Also, each guest OS has slightly different behaviours (for instance, where uploaded files end up, or where shared folders are located). Some OSes don’t fully support all the capabilities (hello macOS), and of course running actual tasks is done differently on each OS. Finally, Vagrant assumes that the current working directory is where the Vagrantfile lives, which is not practical for developing native code.

minicoin status

minicoin provides various abstractions that hide many of the platform-specific details, works around some of the guest OS limitations, and makes the definition of virtual machines fully declarative (using a YAML file; I’m by no means the first one with that idea, so shout-out to Scott Lowe). It defines a structure for providing standard provisioning steps (which I call “roles”) for configuring machines, and for jobs that can be executed on a machine. I hope the documentation gets you going, and I’d definitely like to hear your feedback. Implementing roles and jobs to support multiple platforms and distributions is sometimes just as complicated as writing cross-platform C++ code, but it’s still a bit less complex than hacking on Qt.

We can’t give access to our ready-made machine images for Windows and macOS, but there are some scripts in “basebox” that I collected while setting up the various base boxes, and I’m happy to share my experiences if you want to set up your own (it’s mostly about following the general Vagrant instructions about how to set up base boxes).

Of course, this is far from done. Building Qt and Qt applications with the various compilers and toolchains works quite well, and saves me a fair bit of time when touching platform-specific code. However, working within the machines is still somewhat clunky, but it should become easier as more jobs are defined. On the provisioning side, there is still a fair bit of work to be done before we can run our auto-tests reliably within a minicoin machine. I’ve experimented with different ways of setting up the build environments; from a simple shell script to install things, to “insert CD with installed software”, and using docker images (for example for setting up a box that builds for WebAssembly, using Maurice’s excellent work with Using Docker to test WebAssembly).

Given the amount of discussions we have on the mailing list about “how to build things” (including documentation, where my journey into this rabbit hole started), perhaps this provides a mechanism for us to share our environments with each other. Ultimately, I’d like coin and minicoin to converge, at least for the definition of the environments – there are already “coin nodes” defined as boxes, but I’m not sure if this is the right approach. In the end, anyone that wants to work with or contribute to Qt should be able to build and run their code in a way that is fairly close to how the CI system does things.

The post Building and testing on multiple platforms – introducing minicoin appeared first on Qt Blog.

Federated conference videos

Tuesday 4th of June 2019 06:03:30 AM

So, foss-north 2019 happened. 260 visitors. 33 speakers. Four days of madness.

During my opening of the second day I mentioned some social media statistics. Only 7 of our speakers had mastodon accounts, but 30 had twitter accounts.

Day two of #fossnorth2019 is starting! @e8johan is giving a quick opening before the key notes.

— (((Niclas Zeising))) (@niclaszeising) April 9, 2019

Given the current situation with an ongoing centralization of services to a few large providers, I feel that the Internet is moving in the wrong direction. The issue is that without these central repository-like services, it is hard to find content. For instance, twitter would be boring if you had no one to tweet to.

That is where federated services enter the picture. Here you get the best of both worlds. Take Mastodon, for instance. It is a federated micro-blogging network. This means that everyone can host their own instance (or simply join one), and all instances together create the larger Mastodon society. It even has the benefit of creating something of a micro-community at each server instance, so you can interact with the larger world (all federated Mastodon servers) and with your local community (the users of your instance).

There are multiple federated solutions out there. Everything from nextcloud, WordPress, pixelfed, matrix, to peertube. The last one, peertube, is a federated alternative to YouTube. It has similar benefits to mastodon, so you have the larger federated space, but also your local community. Discussing the foss-north videos with Reg over Mastodon, we realized that there is a gap for a peertube instance for conference videos.

Sorry for the Swedish. We basically agree that a peertube instance for conference videos is a great idea.

I really hate saying that something should happen while not having the time to do something about it. There are so many things that really should happen. Luckily, Reg reached out to me and offered to set it up.

Said and done, he went to spacebear and created an instance of peertube. Once he got the domain, I started migrating videos and tested it out. You can try it yourself. For instance, here is an embedded video from the lightning talks:

If you help organize a conference and want to use federated video hosting, contact Reg to get an account. If you’re interested in free and open source, drop in and check out the videos.

I was at the Libre Graphics Meeting 2019

Monday 3rd of June 2019 09:35:35 AM

I had a nice surprise last Monday: I learned that Saarbrücken (Germany), the city where I live, was hosting the 2019 edition of the nice Libre Graphics Meeting (LGM). So I took the opportunity to attend my first FOSS event. The event took place at the Hochschule der Bildenden Künste Saar from Wednesday 29.05 to Sunday 02.06.

I really enjoyed it; I met a lot of other Free Software contributors (not only devs), and discovered some nice programming and artistic projects.

There were some really impressive presentations and workshops.

Thursday 30.05.

GEGL (GIMP’s new ‘rendering engine’) maintainer Øyvind Kolås presented how to use GEGL effects from the command line, and how the same commands can be used directly from GIMP. This is helpful when we want to automate some workflow.

In the afternoon, I discovered PraxisLIVE, an awesome live-coding IDE where you can create effects with Java and a graph editor, and show the effect instantly on, for example, a webcam stream or a music track.

Ana Isabel Carvalho and Ricardo Lafuente presented their past workshop in Porto, where the participants created pixel-art fonts with git and GitLab CI.

Friday, 31.05.

On Friday, I took part in two workshops. The first was the GIMP one, where I met a lot of GIMP/GEGL developers. But it was more a development meeting than a workshop where I could get my hands dirty.

I also took part in the Inkscape workshop, where I learned about all of the nice features coming in Inkscape 1.0 (a new alpha version was released during LGM 2019, and users are encouraged to report bugs and regressions). I also learned that Inkscape can be used to create nice woodworking projects:

The model is published in the Thingiverse under CC BY-NC-SA 3.0.

After this productive day, most of the LGM participants went to the ‘Kneipentour’ (bar-hopping) and enjoyed some good Zwickel (the local beer).

Saturday, 01.06.

After last night, it was a bit difficult to get up, but I managed to be only one minute late to Boudewijn Rempt’s talk “HDR Support in Krita”.

In the afternoon, I took part in the Paged.js workshop, where we were able to create a book layout with CSS and HTML. Paged.js could be interesting for generating nice KDE handbooks with a professional look and feel, because it only uses web standards (including some not yet implemented in any web browser), and we could generate the PDF from the already existing HTML version.

Sunday, 02.06.

On Sunday I took part in the Blender workshop, where Julian Eisel did an excellent job explaining the internals of how Blender’s “DNA and RNA system” achieves great backward compatibility for .blend files and makes it almost painless to write UIs in Python directly connected to the DNA.


To summarize, LGM was a great event; I really enjoyed it and I hope I will be able to attend the next edition in Rennes (France) and see all these nice people again.

Oh, and now I have more stickers on my laptop.

You can comment to this post in Mastodon.

Many thanks to Drew DeVault for proofreading this blog post.

Hello GSoC

Sunday 2nd of June 2019 12:48:31 PM

This blog post recaps the last year and how I ended up getting selected for GSoC 2019 as a student with an awesome project in KDE. Read more on the blog post here.

KDE Usability & Productivity: Week 73

Sunday 2nd of June 2019 04:53:32 AM

Week 73 in Usability & Productivity initiative is here! We have all sorts of cool stuff to announce, and eagle-eyed readers will see bits and pieces of Plasma 5.16’s new wallpaper “Ice Cold” in the background!

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also find out how you can help and be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

April/May in KDE Itinerary

Saturday 1st of June 2019 08:00:00 AM

A lot has happened again around KDE Itinerary since the last two-month summary. A particular focus area at the moment is achieving “Akademy readiness”, that is, being able to properly support trips to KDE Akademy in Milan in early September; understanding the tickets of the Italian national railway is a first step in that direction.

New Features

The timeline view in KDE Itinerary now highlights the current element(s), to make it easier to find the relevant information. Active elements also got an “are we there yet?” indicator, a small bar showing the progress of the current leg of a trip, taking live data into account.

Trip element being highlighted and showing a progress indicator.

Another clearly visible addition can be found in the trip group summary elements in the timeline. Besides expanding or collapsing the trip, these elements now also show information concerning the entire trip when available, such as the weather forecast, or any power plug incompatibility you might encounter during the trip.

Trip group summary showing weather forecast for the entire trip. Trip group summary showing power plug compatibility warnings.

Less visible but much more relevant for “Akademy readiness” was adding support for Trenitalia tickets. That required some changes and additions to how we deal with barcodes, as well as an (ongoing) effort to decode the undocumented binary codes used on those tickets. More details can be found in a recent post on this subject.

Infrastructure Work

A lot has also happened behind the scenes:

  • The ongoing effort to promote KContacts and KCalCore yields some improvements we benefit from directly as well, such as the Unicode diacritic normalization applied during the country name detection in KContacts (reducing the database size and detecting country names also with slight spelling variations) or the refactoring of KCalCore::Person and KCalCore::Attendee (which will make those types easily accessible by extractor scripts).
  • The train reservation data model now contains the booked class, which is particularly useful information when you don’t have a seat reservation and need to pick the right compartment.
  • The RCT2 extractor (relevant e.g. for DSB, NS, ÖBB, SBB) got support for more variations of seat reservations and more importantly now preserves the ticket token for ticket validation with the mobile app.
  • The train station knowledge database is now also indexed by UIC station codes, which became necessary to support the Trenitalia tickets.
  • Extractor scripts got a new utility class for dealing with unaligned binary data in barcodes.

We also finally found the so far elusive mapping table for the station identifiers used in SNCF barcodes, provided by Trainline as Open Data. This has yet to find its way into Wikidata though, together with more UIC station codes for train stations in Italy. Help welcome :)

Performance Optimizations

Keeping an eye on performance while the system becomes more complex is always a good idea, and a few things have been addressed in this area too:

  • The barcode decoder so far was exposed more or less directly to the data extractors, resulting in possibly performing the expensive decoding work twice on the same document, e.g. when both the generic extractor and one or more custom extractors processed a PDF document. Additionally, each of those were applying their own heuristics and optimizations to avoid expensive decoding attempts where they are unlikely to succeed. Those optimizations now all moved to the barcode decoder directly, together with a positive and negative decoding result cache. That simplifies the code using this, and it speeds up extraction of PDF documents without a context (such as a sender address) by about 15%.
  • Kirigami’s theme color change compression got further optimized, which in the case of KDE Itinerary avoids the creation of a few hundred QTimer objects.
  • The compiled-in knowledge database got a more space-efficient structure for storing unaligned numeric values, cutting down the size of the 24bit wide IBNR and UIC station code indexes by 25%.
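To illustrate the idea behind that last point, here is a hypothetical sketch (not the actual KDE Itinerary data structure): packing 24-bit values into three bytes each, instead of padding them to 32-bit integers, yields exactly the 25% size reduction mentioned above.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical container for unaligned 24-bit numeric values,
// such as IBNR or UIC station codes (which fit in 24 bits).
class Packed24 {
public:
    void append(uint32_t value) {
        // Store the low, middle and high byte: 3 bytes instead of 4.
        mData.push_back(static_cast<uint8_t>(value & 0xff));
        mData.push_back(static_cast<uint8_t>((value >> 8) & 0xff));
        mData.push_back(static_cast<uint8_t>((value >> 16) & 0xff));
    }
    uint32_t at(std::size_t i) const {
        // Reassemble the value from its three bytes.
        return mData[3 * i]
            | (static_cast<uint32_t>(mData[3 * i + 1]) << 8)
            | (static_cast<uint32_t>(mData[3 * i + 2]) << 16);
    }
    std::size_t byteSize() const { return mData.size(); }

private:
    std::vector<uint8_t> mData;
};
```

The trade-off is that lookups need a small amount of byte shuffling, which is cheap compared to the memory saved in a compiled-in database.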
Fixes & Improvements

There’s plenty of smaller changes that are noteworthy too of course:

  • We fixed a corner case in KF5::Prison’s Aztec encoder that UIC 918.3 tickets can trigger, which produced invalid barcodes.
  • The data extractors for Brussels Airlines, Deutsche Bahn and SNCF got fixes for various booking variants and corner cases.
  • Network coverage for KPublicTransport increased, including operators in Ireland, Poland, Sweden, parts of Australia and more areas in Germany.
  • More usage of emoji icons in KDE Itinerary got replaced by “real” icons, which fixes rendering glitches on Android and produces a more consistent look there.
  • Lock inhibition during barcode scanning now also works on Linux.
  • PkPass files are now correctly detected on Android again when opened as a content: URL.
  • The current trip group in the KDE Itinerary timeline is now always expanded, which fixes various confusions in the app when “now” or “today” don’t exist due to being in a collapsed time range.
  • Multi-day event reservations are now split in begin and end elements in the timeline as already done for hotel bookings.
  • Rental car bookings with a drop-off location different from the pick-up location are now treated as location changes in the timeline, which is relevant e.g. for the weather forecasts.
  • Extracting times from PkPass boarding passes now converts those to the correct timezone.

A big thanks to everyone who donated test data again, this continues to be essential for improving the data extraction.

If you want to help in other ways than donating test samples too, see our Phabricator workboard for what’s on the todo list, for coordinating work and for collecting ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Matrix or Freenode.

Organizing time on Plasma Mobile

Friday 31st of May 2019 06:00:00 AM

About a year ago the phabricator tasks of Plasma Mobile were extensively revamped. We tried to make clear the objective of each task, providing helpful resources and facilitating onboarding. Looking at the features needed to reach the “Plasma Mobile 1.0” milestone, the calendar application was sticking out. So, Calindori was born (even though this name was coined some months later).

Built on top of Qt Quick and Kirigami, and following (or trying to follow) the KDE human interface guidelines, the whole point of Calindori is to help users manage their time. Through a clean user interface, it aims to offer users an intuitive way to accomplish their tasks.

Calindori home and events pages

For the time being, Calindori provides the basic calendar functionalities: you can check previous and future dates and manage events and todos. Tasks and events of different domains (e.g. personal, business) can be included in different calendars, since multiple calendars are supported. Import functionality has also been added so as to make the transition of new users easier.

You may test Calindori at the moment, either by building the 1.0 release from source or by just installing the flatpak bundle. A Plasma Mobile phone is not a requirement for testing Calindori; it runs perfectly on Plasma or any other Linux desktop environment. It has been designed with the needs of mobile users in mind, but the great advantage of using a framework like Kirigami is the adaptation of the user interface to desktop environments, with little or no additional development effort. It would also be very helpful if you report issues and provide feedback on the GitLab repository.

Behind the scenes, the iCalendar standard is followed, as implemented by the KDE KCalCore library. KDE/Qt developers may also find the date and time pickers included in the application interesting. These components may be reviewed, enhanced and, why not, find their way into KDE Frameworks.

Calindori date and time pickers

Looking to the future, support for repeating events and reminders are the first tasks that should be handled. Then, CalDAV should also be supported, enabling users to synchronize their online calendars (e.g. Nextcloud). Nevertheless, that’s a task that should be discussed with the rest of the Plasma Mobile team. The next KDE Akademy in Milan in September will be a great opportunity.

The Plasma Mobile team has made significant progress during the last months, trying hard to offer a KDE Plasma experience to mobile users. Even though it is not a 100% complete mobile platform, the missing parts are slowly being added to the software stack. But a lot of interesting tasks are still available, waiting for people willing to help the Plasma Mobile ecosystem grow.

In the next months a set of devices (like the PinePhone and Librem 5) are expected, enabling the execution of Linux distributions on real mobile hardware. This will significantly change the free software mobile environment; it will open the way for more privacy-friendly devices that put users in the driver’s seat, without spying on them or treating them like products. I believe it is the perfect time to get involved with projects like Plasma Mobile, helping to create an open mobile platform that will bring a KDE breeze to mobile phones.

KStars v3.2.3 is Released!

Thursday 30th of May 2019 09:38:16 PM
Another minor release of the 3.2.X series is out: KStars v3.2.3 is released for Windows/Mac/Linux. This will probably be the last minor release of the 3.2.X series, with 3.3.0 now entering development.

This release contains a few minor bug fixes and a few convenient changes that were requested by our users.

The Sky Map cursor can now be configured. The default X icon can be changed to either the arrow or the circle mouse cursor by going to Configure KStars --> Advanced --> Look & Feel.

It is also possible now to make a left click immediately snap to the object under the cursor. The default behavior is to double-click on an object to focus it, but if Left Click Selects Object is checked, then any left click snaps to the object right away.

Another minor change was to include the profile name directly in the window title for Ekos, to make it easy to remember which profile is currently running, in addition to a few icon changes.

A race condition when using a guide camera was fixed, thanks to Bug #407952 filed by Kevin Ross. In addition, Wolfgang Reissenberger improved the estimated time of scheduler jobs when there are repeated jobs in the sequence.

Using std::unique_ptr with Qt

Thursday 30th of May 2019 07:29:56 PM
Qt memory handling

Qt has a well established way to handle memory management: any QObject-based instance can be made a child of another QObject instance. When the parent instance is deleted it deletes all its children. Simple and efficient.

When a Qt method takes a QObject pointer, one can rely on the documentation to know if the function takes ownership of the pointer. The same goes for functions returning a QObject pointer.

What the rest of the world does

This is very Qt specific. In the rest of the C++ world, object ownership is more often managed through smart pointers like std::unique_ptr and std::shared_ptr.

I am used to the Qt way, but I like the harder-to-misuse and self-documenting aspects of the unique_ptr way. Look at this simple function:

Engine* createEngine();

With this signature we have no way to know if the caller is expected to delete the new engine, and the compiler cannot help us. This code builds:

    {
        Engine* engine = createEngine();
        engine->start();
        // Memory leak!
    }

With this signature, on the other hand:

std::unique_ptr<Engine> createEngine();

It is clear and hard to ignore that ownership is passed to the caller. This won't build:

    {
        Engine* engine = createEngine();
        engine->start();
    }

But this builds and does not leak:

    {
        std::unique_ptr<Engine> engine = createEngine();
        engine->start();
        // No leak, Engine instance is deleted when going out of scope
    }

(And we can use auto to replace the lengthy std::unique_ptr<Engine> declaration)

We can also use this for "sink functions": declaring a function argument as std::unique_ptr makes it unambiguous that the function takes ownership of it.
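A minimal sketch of such a sink function (Engine, Car and installEngine are illustrative names made up for this example, not an actual API):

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct Engine {
    bool started = false;
    void start() { started = true; }
};

struct Car {
    // Sink function: taking std::unique_ptr by value makes it
    // unambiguous that Car takes ownership of the engine.
    void installEngine(std::unique_ptr<Engine> engine) {
        mEngine = std::move(engine);
    }
    std::unique_ptr<Engine> mEngine;
};
```

The caller must write `car.installEngine(std::move(enginePtr))`, so the transfer of ownership is visible at the call site, and the caller's pointer is null afterwards.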

Using std::unique_ptr for member variables brings similar benefits, but one point I really like is how it makes the class definition more self-documenting. Consider this class:

class Car {
    /*...*/
    World* mWorld;
    Engine* mEngine;
};

Does Car own mWorld, mEngine, both, or neither? We can guess, but we can't really know. Only the class documentation or our knowledge of the code base could tell us that Car owns mEngine but does not own mWorld.

On the other hand, if we work on a code base where all owned objects are std::unique_ptr and all "borrowed" objects are raw pointers, then this class would be declared like this:

class Car {
    /*...*/
    World* mWorld;
    std::unique_ptr<Engine> mEngine;
};

This is more expressive.

Forward declarations

We need to be careful with forward declarations. The following code won't build:


// Car.h
#include <memory>

class Engine;

class Car {
public:
    Car();
private:
    std::unique_ptr<Engine> mEngine;
};


// Car.cpp
#include "Car.h"
#include "Engine.h"

Car::Car()
    : mEngine(std::make_unique<Engine>()) {
}


// main.cpp
#include "Car.h"

int main() {
    Car car;
    return 0;
}

The compiler fails to build "main.cpp". It complains it cannot delete mEngine because the Engine class is incomplete. This happens because we have not declared a destructor in Car, so the compiler tries to generate one when building "main.cpp", and since "main.cpp" does not include "Engine.h", the Engine class is unknown there.

To solve this we need to declare the Car destructor and tell the compiler to generate its implementation in Car's implementation file:


// Car.h
#include <memory>

class Engine;

class Car {
public:
    Car();
    ~Car(); // <- Destructor declaration
private:
    std::unique_ptr<Engine> mEngine;
};


// Car.cpp
#include "Car.h"
#include "Engine.h"

Car::Car()
    : mEngine(std::make_unique<Engine>()) {
}

Car::~Car() = default; // <- Destructor "definition"

Using std::unique_ptr with Qt code

I wanted to experiment with using unique_ptr instead of the Qt parent-child system in a real project, so I decided to do so on Lovi, a Qt-based log file viewer I am working on. It works out well, but there are a few pitfalls to be aware of.

Double deletions

If your class owns a QObject through a unique_ptr, make sure the QObject's parent is not deleted before your class: the parent will delete your QObject, and when your class destructor runs, the unique_ptr will try to delete the already deleted QObject.

This also happens if you use a unique_ptr for a QDialog with the Qt::WA_DeleteOnClose attribute set.

Get used to calling .get()

Another change compared to using raw pointers is that every time you pass the object to a method which takes a raw pointer, you need to call .get(). So for example connecting the Engine::started() signal to our Car instance would be done like this:

connect(mEngine.get(), &Engine::started, this, &Car::onEngineStarted);

This is a bit annoying but again, it makes it explicit that you are "lending" your object to another function.
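The same pattern applies beyond connect(): any function that merely borrows the object can take a raw pointer, and the caller passes .get(). A minimal sketch (Engine and isStarted are hypothetical names used for illustration):

```cpp
#include <cassert>
#include <memory>

struct Engine {
    bool started = false;
    void start() { started = true; }
};

// Borrowing function: the raw pointer signals that isStarted()
// uses the engine but takes no ownership of it.
bool isStarted(const Engine* engine) {
    return engine != nullptr && engine->started;
}
```

The caller keeps ownership in its std::unique_ptr and only lends the object for the duration of the call: isStarted(mEngine.get()).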

What about QScopedPointer?

Qt comes with QScopedPointer, which is very similar to std::unique_ptr. You might already be using it. Why should you use unique_ptr instead?

The main difference between the two is that QScopedPointer lacks move semantics. This makes sense, since it was created to work with C++98, which does not have move semantics.

This means it is more cumbersome to implement sink functions with it. Here is an example.

Suppose we want to create a function to shred a car. The Car instance received by this function should not be usable once it has been called, so we want to make it a sink function, like this:

void shredCar(std::unique_ptr<Car> car) {
    // Shred that car
}

The compiler rightfully prevents us from calling shredCar() like this:

auto car = std::make_unique<Car>();
shredCar(car);

Instead we have to write:

auto car = std::make_unique<Car>();
shredCar(std::move(car));

This makes it explicit that car no longer points to a valid instance. Now let's write shredCar() using QScopedPointer instead:

void shredCar(QScopedPointer<Car> car) {
    // Shred that car
}

Since QScopedPointer does not support move, we can't write:

shredCar(std::move(car));

Instead we have to write this:

shredCar(QScopedPointer<Car>(car.take()));

which is less readable, and less efficient since we create a new temporary QScopedPointer.

Other than its name being more Qt-like, QScopedPointer has no advantages compared to std::unique_ptr.

Borrowed pointers

Following the guideline of using std::unique_ptr<> for pointers we own means we should only use raw pointers for "borrowed" objects: objects owned by someone else, which have been passed to us (yes, this sounds very Rust-like).

To make things even more explicit, I am contemplating the idea of creating a borrowed_ptr<T> pointer. This pointer would not do anything more than a raw pointer, but it would make it clear that the code using the pointer does not own the object.
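Such a borrowed_ptr<T> could be as small as a thin wrapper that forwards to a raw pointer; here is a minimal sketch of the idea (this is a hypothetical class, not an existing library type):

```cpp
#include <cassert>

// borrowed_ptr<T>: behaves like a raw pointer but documents that the
// holder does not own the object and must never delete it.
template <typename T>
class borrowed_ptr {
public:
    borrowed_ptr(T* ptr = nullptr) : mPtr(ptr) {}

    T* operator->() const { return mPtr; }
    T& operator*() const { return *mPtr; }
    T* get() const { return mPtr; }
    explicit operator bool() const { return mPtr != nullptr; }

private:
    T* mPtr; // never deleted here: the owner lives elsewhere
};
```

The wrapper adds no behavior and compiles away entirely; its only job is to make non-ownership visible in the type.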

Taking our Car example again, the class definition would look like this:

class Car {
    /*...*/
    borrowed_ptr<World> mWorld;
    std::unique_ptr<Engine> mEngine;
};

Such a pointer would make code more readable at the cost of verbosity. It could also be really useful when generating bindings for other languages. What do you think?


I believe using std::unique_ptr can help make your code more readable and robust. Consider using it instead of raw pointers. Not only for local variables, but also for function arguments, return values and member variables.

And if you use QScopedPointer, you can switch to std::unique_ptr: it has a few advantages and no drawbacks.

More in Tux Machines

Android Leftovers

Linux on the mainframe: Then and now

Last week, I introduced you to the mainframe's origins from a community perspective. Let's continue our journey, picking up at the end of 1999, which is when IBM got onboard with Linux on the mainframe (IBM Z). These patches weren't part of the mainline Linux kernel yet, but they did get Linux running on z/VM (Virtual Machine for IBM Z), for anyone who was interested. Several efforts followed, including the first Linux distro—put together out of Marist College in Poughkeepsie, N.Y., and Think Blue Linux by Millenux in Germany. The first real commercial distribution came from SUSE on October 31, 2000; this is notable in SUSE history because the first edition of what is now known as SUSE Enterprise Linux (SLES) is that S/390 port. Drawing again from Wikipedia, the SUSE Enterprise Linux page explains: Read more

OSS: Cisco Openwashing, GitLab Funding, Amazon Openwashing, Chrome OS Talk and More Talks

  • Why Open Source continues to be the foundation for modern IT

    Open source technology is no longer an outlier in the modern world, it's the foundation for development and collaboration. Sitting at the base of the open source movement is the Linux Foundation, which despite having the name Linux in its title, is about much more than just Linux and today is comprised of multiple foundations, each seeking to advance open source technology and development processes. At the recent Open Source Summit North America event held in San Diego, the width and breadth of open source was discussed, ranging from gaming to networking, to the movie business, to initiatives that can literally help save humanity. "The cool thing is that no matter whether it's networking, Linux kernel projects, the Cloud Native Computing Foundation projects like Kubernetes, or the film industry with the Academy Software Foundation (ASWF), you know open source is really pushing innovation beyond software and into all sorts of different areas," Jim Zemlin, executive director of the Linux Foundation said during his keynote address.

  • GitLab Inhales $268M Series E, Valuation Hits $2.75B

    GitLab raised a substantial $268 million in a Series E funding round that more than doubled what the firm had raised across all of its previous funding rounds and pushed its valuation to $2.75 billion. It also bolsters the company’s coffers as it battles in an increasingly competitive DevOps space. GitLab CEO Sid Sijbrandij said in an email to SDxCentral that the new Series E funds will help the company continue to move on its goal of providing a single application to support quicker delivery of software. It claims more than 100,000 organizations use its platform. “These funds will help us to keep up with that pace and add to that with our company engineers,” Sijbrandij explained. “We need to make sure every part of GitLab is great and that CIOs and CTOs who supply the tools for their teams know that if they bet on GitLab that we’ll stand up to their expectations.”

  • Amazon open-sources its Topical Chat data set of over 4.7 million words [Ed: openwashing of listening devices without even releasing any code]
  • How Chrome OS works upstream

    Google has a long and interesting history contributing to the upstream Linux kernel. With Chrome OS, Google has tried to learn from some of the mistakes of its past and is now working with the upstream Linux kernel as much as it can. In a session at the 2019 Open Source Summit North America, Google software engineer Doug Anderson detailed how and why Chrome OS developers work upstream. It is an effort intended to help the Linux community as well as Google. The Chrome OS kernel is at the core of Google's Chromebook devices, and is based on a Linux long-term support (LTS) kernel. Anderson explained that Google picks an LTS kernel every year and all devices produced in that year will use the selected kernel. At least once during a device's lifetime, Google expects to be able to "uprev" (switch to a newer kernel version). Anderson emphasized that if Google didn't upstream its own patches from the Chrome OS kernel, it would make the uprev process substantially more difficult. Simply saying that you'll work upstream and actually working upstream can be two different things. The process by which Chrome OS developers get their patches upstream is similar to how any other patches land in the mainline Linux kernel. What is a bit interesting is the organizational structure and process of how Google has tasked Chrome OS developers to work with upstream. Anderson explained that developers need to submit patches to the kernel mailing list and then be a little patient, giving some time for upstream to respond. A key challenge, however, is when there is no response from upstream. "When developing an upstream-first culture, the biggest problem anyone can face is silence," Anderson said. Anderson emphasized that when submitting a patch to the mailing list, what a developer is looking for is some kind of feedback; whether it's good or bad doesn't matter, but it does matter that someone cares enough to review it. 
When there is no community review, the Chrome OS team has other Chrome OS engineers publicly review the patch. The risk and worry of having Chrome OS engineers comment on Chrome OS patches is that the whole process might look a little scripted and there could be the perception of some bias as well. Anderson noted that it is important that only honest feedback and review is given for a patch.

  • Open Source Builds Trust & Credibility | Karyl Fowler

    Karyl Fowler is co-founder and CEO of Transmute, a company that’s building open source and decentralized identity management. We sat down with Fowler at the Oracle OpenWorld conference to talk about the work Transmute is doing.

  • What Is Infrastructure As Code?

    Rob Hirschfeld, Founder, and CEO of RackN breaks Infrastructure As Code (IaC) into six core concepts so users have a better understanding of it.

  • Everything You Need To Know About Redis Labs

    At the Oracle OpenWorld conference, we sat down with Kyle Davis – Head of Developer Advocacy at Redis Labs – to better understand what the company does.

Programming: Java, Python, and Perl

  • Oracle Releases Java 13 with Remarkable New Features

    Oracle – the software giant – has released Java SE and JDK 13 along with the promise to introduce more new features in the future within the six-month cycle. Java 13’s binaries are now available for download with improvements in security, performance, stability, and two new additional preview features ‘Switch Expressions’ and ‘Text Blocks’, specifically designed to boost developers’ productivity level. This gives the hope that the battle of Java vs Python will be won by the former. Remarking on the new release, Oracle said: “Oracle JDK 13 increases developer productivity by improving the performance, stability and security of the Java SE Platform and the JDK.” [...] Speaking of the Java 13 release, it is licensed under the GNU General Public License v2 along with the Classpath Exception (GPLv2+CPE). The director of Oracle’s Java SE Product Management, Sharat Chander stated “Oracle offers Java 13 for enterprises and developers. JDK 13 will receive a minimum of two updates, per the Oracle CPU schedule, before being followed by Oracle JDK 14, which is due out in March 2020, with early access builds already available.” Let’s look into the new features that JDK 13 comes packed with.

  • 8 Python GUI Frameworks For Developers

    Graphical User Interfaces make human-machine interactions easier as well as intuitive. It plays a crucial role as the world is shifting.

  • What's In A Name? Tales Of Python, Perl, And The GIMP

    In the older days of open source software, major projects tended to have their Benevolent Dictators For Life who made all the final decisions, and some mature projects still operate that way. Guido van Rossum famously called his language “Python” because he liked the British comics of the same name. That’s the sort of thing that only a single developer can get away with. However, in these modern times of GitHub, GitLab, and other collaboration platforms, community-driven decision making has become a more and more common phenomenon, shifting software development towards democracy. People begin to think of themselves as “Python programmers” or “GIMP users” and the name of the project fuses irrevocably with their identity. What happens when software projects fork, develop apart, or otherwise change significantly? Obviously, to prevent confusion, they get a new name, and all of those “Perl Monks” need to become “Raku Monks”. Needless to say, what should be a trivial detail — what we’ve all decided to call this pile of ones and zeros or language constructs — can become a big deal. Don’t believe us? Here are the stories of renaming Python, Perl, and the GIMP.

  • How to teach (yourself) computer programming

    Many fellow students are likely in the same boat, the only difference being that the vast majority not only don’t list computer science as one of their passions (but more as one of their reasons for not wanting to live anymore), but they also get a very distorted view of what computer science and programming actually are.

    Said CS classes tend to be kind of a joke, not only because of the curriculum. The main reason why they are bad and boring is the way they are taught. I am going to address my main frustrations on this matter together with proposed solutions and a guide for those who want to start learning alone.

  • [Old] Perl Is Still The Goddess For Text Manipulation

    You heard me. Freedom is the word here with Perl.

    When I’m coding freely at home on my fun data science project, I rely on it to clean up my data.

    In the real world, data is often collected with loads of variations. Unless you are using someone’s “clean” dataset, you better learn to clean that data real fast.

    Yes, Perl is fast. It’s lightning fast.