Planet KDE

KDE Usability & Productivity: Week 86

Sunday 1st of September 2019 05:54:32 AM

Here’s week 86 in KDE’s Usability & Productivity initiative! There are lots and lots of cool changes, which is especially impressive as the KDE community prepares for Akademy, which kicks off next weekend. Sadly I cannot attend this year–there was an unavoidable scheduling conflict with my best friend’s wedding–but I will be there in spirit! In the meantime, check out the progress:

New Features
Bugfixes & Performance Improvements
User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

digiKam Recipes 19.09.01

Sunday 1st of September 2019 12:00:00 AM

After a somewhat prolonged quiet period, a new revision of the digiKam Recipes book is ready. This version brings a refreshed book cover as well as new content:
  • Use Git to maintain multiple digiKam profiles and easily switch between them
  • Back up all digiKam settings, so you don’t have to configure the application from scratch when you reinstall it
  • Get the most out of the tagging functionality by customizing individual tags
All digiKam Recipes readers will receive the updated version of the book automatically and free of charge.

Help Beta Test Krita 4.2.6!

Saturday 31st of August 2019 10:00:56 AM

This will be the first Krita release since the big sprint. We’re aiming to do monthly bugfix releases again from now on! But we also want to cut down on the regressions that come with rapid development so we’re making beta releases again. Please help the team out and check these beta releases for bugs and regressions. Right there in the welcome screen is a link to a survey where you can give your feedback:

The survey idea is new — we want to get your impressions of what we’re about to release. If it works, we’ll be refining the surveys.

What’s new in 4.2.6?

New features:
  • Add new layer from visible to layer right-click context menu.
  • When running Krita for the first time on Windows, ANGLE is now the default renderer. Note that if you have an NVIDIA GPU and Krita’s window is transparent, you need to select ANGLE manually in Krita’s settings; if you have another GPU and you have problems with the canvas not updating, you might need to manually select OpenGL in the same window.

We want to especially thank Karl Ove Hufthammer for his extensive work on polishing the translatable strings.

Bugs fixed
  • Allow selection overlay to be reset to default. (BUG:410470)
  • Set date for bundle creation to use ISO-Date. (BUG:410490)
  • Fix freeze with 32bit float tiff by using our own tiff reader for the thumbnails. (BUG:408731)
  • Ensure filter mask button is disabled appropriately depending on whether the filter supports it. (BUG:410374)
  • Enable the small color selector if opengles is available as well (BUG:410602)
  • Fix mixed Zoom, Pan, Rotate on macOS (BUG:410698)
  • Ensure that checkboxes are shown in menus even when using the fusion theme
  • Isolate Layer Crash (BUG:408785)
  • Properly fix font resetting when all the text in the editor removed (BUG:409243)
  • Fix lags in Move Tool when using tablet device (BUG:410838)
  • Fix Shift and Alt modifiers in Outline Selection Tool (BUG:410532)
  • Ensure Convert group to Animated Layer shows text in the toolbar. (BUG:410500)
  • Allow ‘Add Clone Layer’ to Work on Multiple Layers (BUG:373338)
  • Fix saving animated transparency masks created through conversion (BUG:409895)
  • Fix curve change despite ‘Use same curve’ checked (BUG:383909)
  • Try harder to make sure that the swap location is writable
  • Properly handle timezones in bundles
  • Make sure all the settings dialogs pages are always shown in the same order
  • Make the settings dialog fit in low-res screens (BUG:410793)
  • Remove misleading ‘px’ suffix for ‘move amount’ shortcut setting
  • Make string for reasons for image export problems translatable (BUG:406973)
  • Fix crash when creating a bezier curve (BUG:410572)
  • Fix deadlocks in KoShapeManager (BUG:410909, BUG:410572)
  • Fix a deadlock when using broken Wacom drivers on Linux (BUG:410797)
  • Fix absolute brush rotation on rotated canvas (BUG:292726)
  • Fix deadlock when removing reference image (BUG:411212)
  • Fix a deadlock in handling of vector objects (BUG:411365)
Download

Windows

For the beta, only portable zip files are available. Just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version of Krita. For reporting crashes, also get the debug symbols folder.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


Note: the gmic-qt is not available on OSX.

Source code md5sum

For all downloads:

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.


A short report on Krita Sprint 2019

Saturday 31st of August 2019 04:42:52 AM

This Krita Sprint was bigger than ever, or so I’ve heard, as this is only my second one, and with the amount of things that happened, deciding what to write about was not easy. The sprint did a lot to create stronger bonds between the different Krita actors: developers and artists. Discussions between the groups allowed us to set effective development goals for the upcoming Krita version, and also showed that some processes need polishing to be truly effective (quality control and testing timeframes come to mind).

I focused mainly on learning how other artists use Krita, which varies significantly between them. Most artists seem to work in a fixed way, but they do it in a controlled environment so results are always consistent. This makes it very important to make all features discoverable in more than one way, since once artists find a comfortable workflow they will rarely leave it, and may never get to know tools they need simply because they never stumble upon them. This might be the case for artists coming from other applications, as tools could be placed where they do not expect them to be. For example, one artist suggested we should have a liquify tool, not knowing that the “tool” was already there; contrary to what they expected, it was not a filter but rather a suboption of the transform tool.

Above you can see the different stages of work: debugging, discussion, drawing. The portraits are part of an idea to improve the “about us” profile page that came up during the meeting.

I also took the opportunity to get to know how other Krita developers work, especially with regard to how they approach users to gain a better understanding of how things should behave. This is especially important for bugs caused not by incorrect function output but by usage quirks, strongly related to the workflows artists follow to complete a task and get a more optimized, less clunky flow of actions. Developers can study how a tool should work and implement it in that direction, but actual users will expect the tool to work seamlessly with the mental model they already possess (coming from traditional workflows, or from the design ideas of other painting software) of the action they want to perform. Of course we can’t just blindly implement any feature or change a user requests, but seeing how discussion slowly narrows down on how to achieve a result in a sensible and logical workflow was a good learning experience.

Short Chronicle of the Sprint

I arrived on August 6th, with almost all members already in the working space. I quickly found a place to start working, and after greeting everyone I helped my GSoC mentee set up the development environment on a spare system we had, as he could not travel with a system of his own. I started looking at the artists working on their illustrations and how they approached devs to discuss improvements and new ideas. In the meantime, between discussions, I did some debugging to fix a couple of bugs. This was also the first time I saw the HDR painting Krita station in action; the color selector is not only a color selector but a light selector, as it shines with real strength. Artists who used it described the sensation as painting with light instead of pigment.

The second day, August 7th, boud had arranged a visit to the Openluchtmuseum, an open-air museum full of historic buildings, preserved functional windmills and watermills, and other devices from other times. All houses were fitted with objects from the time the house was built, with depictions of how life in the Netherlands was throughout history. We split into groups to find interesting places to draw and get inspiration for creative ideas. The museum was very productive, and after visiting it we headed to dinner for a long talk and good food.

On August 8th we had the big meeting, all of us sitting together in a discussion that lasted the entire day. We talked through every topic in depth, as everyone was there: user outreach, quality control, feature requests, bug report and fix rates, development priorities, and possible goals for next year. For each important task to tackle, a task in Phabricator was scheduled to be created later that sprint. After a long day of discussing ideas, we went to have a very good dinner, where talk and discussions kept going in a more relaxed manner.

I stayed a couple more days at the sprint headquarters and worked on bugs during the day. At night I had good conversations with Wolthera and boud about many topics. Sadly all things come to an end, but I headed back full of inspiration and work, lots of work to do for the next half year.

Akademy Ahead!

Friday 30th of August 2019 10:00:00 PM

In just one week, it’s Akademy-time!

Akademy is the yearly get-together of the KDE community and of KDE e.V. (the association that supports the community’s activities). As always, the conference and attendance is free (gratis).

I’m doing two short talks at Akademy. Neither of them are about my “core business” but they are about things I think are important in the KDE community:

  • KDE Frameworks cover a lot of ground, and they are less-well publicised than they should be. I get fairly regular comments from Harald like “hey, you could use framework instead of this hand-written code” and he’s generally right.

    So, this is my talk to pass his wisdom on to others.

    In many ways, this talk will be like one of the C++ STL overview talks, such as those by Jonathan Boccara, but for KDE Frameworks. And much shorter, and less in-depth.

  • Matrix was added to the KDE community’s communication channels about six months ago. Not without some hiccups, for sure. It’s an addition to existing communication channels. Some of the channels I hang around in moved to Matrix, and in doing so we lost IRC-bot support.

    Of course there’s tons of libraries and applications for Matrix, so I sat down and used one of them – libqmatrixclient, now libQuotient – to write a bot for Matrix channels. It runs meetings, and handles cups of coffee. This talk will be partly promotion for libQuotient, partly do-cool-things-because-you-can. And also only 10 minutes.

There’s a whole weekend of talks followed by a week of hacking and BoFs. See you all there!

GSoC’19 Project : Milestone 3

Friday 30th of August 2019 04:59:06 PM

The third milestone for my Google Summer of Code 2019 project, porting KDE Connect to Windows, involves porting the remaining plugins of the Linux build so they work similarly on the Windows build. Cool stuff!

There are a lot of plugins in KDE Connect that improve the user experience by providing various features. The project team keeps working hard (in their free time, purely as volunteers) to maintain and create the feature-rich plugins that make up KDE Connect’s functionality. The status of these plugins in the Windows build, as of the time of writing, is given below:

The list below gives each plugin’s name, what it does, whether it works on Windows, and how to access or use it:

  • battery: relays the battery information (percentage levels, low battery notifications, et al). Works on Windows. Right click on the KDE Connect system tray icon -> <connected device name>; a low battery notification will show up when the battery level goes below a certain amount (depends on the other device’s settings).
  • clipboard: syncs the clipboard of connected devices so the same content is available on both. Works passively; enabled by default. Simply copy some text on one device and it will get to the other device’s clipboard instantly.
  • contacts: allows access to the contacts of the connected device. Works on Windows. The contacts are stored as VCards, with one contact file for each contact; these are saved in the kdeconnect configs folder for now. Refer to the documentation.
  • findmyphone: ring your phone from the desktop. Works on Windows. Right click on the KDE Connect system tray icon -> <connected device name> -> Ring device.
  • findthisdevice: remotely ring your desktop from your phone. Works on Windows. FOR ANDROID: tap the hamburger menu (top right) in the KDE Connect app -> Ring.
  • lockdevice: remotely lock the desktop. Not ported yet. REASON: it can be worked on if users are interested in this feature.
  • mousepad: allows controlling the mouse cursor from your phone. Works on Windows. FOR ANDROID: open the KDE Connect app -> Remote Input.
  • mpriscontrol: allows controlling the media playback of desktop apps (play, pause, next, previous) through other devices. Works with limitations. REASON: technical limitations of the operating system don’t allow the full experience seen in the Linux build of KDE Connect.
  • mprisremote: control the media playback of the connected device remotely via your desktop. Not ported. REASON: the implementation is done via plasmoids (part of KDE Plasma), which are not implemented for Windows in any shape or form.
  • notifications: receive mobile notifications on your PC and interact with them as you would on your mobile phone. Works passively; enabled by default.
  • pausemusic: pauses/mutes any playing media (on the desktop) when there is a call on the connected mobile. Works passively; enabled by default. NOTE: due to technical limitations of the operating system, we cannot reliably pause any/every media that was playing when the mobile was called. Hence, on Windows, the desktop volume will only be muted when there is a call, and un-muted accordingly when the user is done with the call.
  • photo: click a photo on your mobile and instantly transfer it to your desktop. Works from within kdeconnect-cli.exe: kdeconnect-cli.exe -d <device_id_of_your_phone> --photo
  • ping: send a notification to the remote device. Works on Windows. Open KDE Connect Settings -> Ping.
  • presenter: point to items on the desktop screen using your mobile as a pointing device. Works on Windows. FOR ANDROID: open the KDE Connect app -> Slideshow Remote -> tap and hold the POINTER button.
  • remotecommands: trigger commands predefined on the remote device. Not ported yet. REASON: it can be worked on if users are interested in this feature.
  • remotecontrol: remotely control the connected desktop. Not ported yet. REASON: it can be worked on if users are interested in this feature.
  • remotekeyboard: use your keyboard to send key events to your paired device. Works on Windows. FOR ANDROID: open the KDE Connect app -> Remote Input -> keyboard icon in the top right corner.
  • remotesystemvolume: control the volume of other connected desktops. Not ported yet. REASON: it can be worked on if users are interested in this feature.
  • runcommand: execute console commands remotely from your mobile. Works on Windows. Add commands from within KDE Connect Settings -> Run commands plugin -> Settings. To trigger the added commands, FOR ANDROID: open the KDE Connect app -> Run Command.
  • screensaver-inhibit: prevent your device from going to sleep while connected. Works passively. Enable the plugin from KDE Connect Settings.
  • sendnotifications: send desktop notifications to your mobile. Not ported yet. REASON: it can be worked on if there are enough users who would like to have this feature.
  • sftp: browse the remote device’s filesystem using SFTP. Works on Windows. Right click on the KDE Connect system tray icon -> <connected device name> -> Browse device.
  • share: share files, text, and URLs from your mobile to the desktop. Works on Windows. FOR ANDROID, to share text: select some text/URL -> Share -> KDE Connect -> <your device name>. NOTE: yes, the software will automatically detect the difference between simple text and a URL being shared and handle them differently. It’s awesome, I know. To share file(s): open the KDE Connect app -> Send files.
  • sms: manage your mobile’s SMSes from your desktop; send SMSes and view received ones. Works on Windows. Right click on the KDE Connect system tray icon -> <connected device name> -> SMS Messages.
  • systemvolume: remotely control the volume of the desktop from your mobile. Works on Windows. FOR ANDROID: open the KDE Connect app -> Multimedia Control.
  • telephony: notify the user on the desktop about any incoming/missed call(s). Works passively; enabled by default.

You can try out the latest build of KDE Connect from this link:

Just head on to this link, and click on the link that says something like kdeconnect-kde-master-XXX-windows-msvc20XX_64-cl.exe

This link will always have the latest build (the latest compiled code on which the developers work) available for you to download, in the foreseeable future! (Yes, bookmark it!)

UPDATE: The binary-factory builds will not have the best support for Windows notifications until the next release of KDE Frameworks! If you want to try out the latest build, you can either compile it yourself using the instructions here, or try out the build I’ll link in the comments below!

I’ll be back with a HOWTO for trying out both types of the build: 1) as a Windows Store app (.Appx package), and 2) as a desktop app (.exe installer).

Happy KDEing!

Shubham (shubham)

Friday 30th of August 2019 08:42:06 AM

About me... huh, who am I? I am Shubham, a 3rd-year undergraduate student pursuing my B.E. (Bachelor of Engineering) at BMS Institute of Technology and Management, Bangalore, India. I am an amateur open source enthusiast and developer, mostly working with C++ and the Qt framework to build standalone applications. I also have decent knowledge of C, Java, Python, bash scripting, and git, and I love developing in the Linux environment. I also practice competitive programming on various online judges. Apart from coding, in my spare time I play cricket or volleyball to keep myself refreshed.

Plasma session weirdness in FreeBSD

Thursday 29th of August 2019 10:00:00 PM

We – the KDE-FreeBSD team – have been puzzling over session management for a bit when running a Plasma desktop (plain X11) on FreeBSD. There’s something tricksy going on with ConsoleKit sessions:

  • When started from SDDM, the Leave menu allows you to log out or lock the system. Automatic screen locking or blanking never comes on. ck-list-sessions lists a session for you, but active is set to false.
  • When started from startx (e.g. by hand, no display manager) with the recommended .xinitrc, you have all the options you are entitled to according to configuration elsewhere in the system (shutdown, reboot, etc.), and in this case ck-list-sessions shows an active session.

We’re not sure what’s up here, but I figured I’d give a little notice that there’s something weird going on. It can be worked around by switching off the display manager and using this .xinitrc:

exec /usr/local/bin/ck-launch-session /usr/local/bin/startkde

Akademy Schedule Mobile Access

Thursday 29th of August 2019 05:25:15 PM

Hello people of this extraordinary world. =D

Last weekend I worked on an improved version of the Akademy schedule site that I launched last year.

This year the website is hosted on KDE infrastructure thanks to the sysadmins’ work. You can check it out at this link:

I hope that we will soon add the BoF information, too.

So far you can add the website to your apps menu with Firefox and Chrome (hopefully on iOS too).

Please check it out and leave your comments below!

If you want to check the source code please go to:

That’s all folks!

Plasma Browser Integration 1.6

Thursday 29th of August 2019 10:00:28 AM

I’m pleased to announce the immediate availability of Plasma Browser Integration version 1.6 on the Chrome Web Store as well as Firefox Add-Ons page.

Konqi surfing the world wide web

Plasma Browser Integration bridges the gap between your browser and the Plasma desktop. It lets you share links, find browser tabs in KRunner, monitor download progress in the notification center, and control music and video playback anytime from within Plasma, or even from your phone using KDE Connect!

What’s new?

Improved media controls

Media controls have seen the most improvements this development cycle: they can now handle the “muted” state of a player, as well as ask the browser to enter and exit full screen via MPRIS. However, the browser might refuse to enter full screen, since from the browser’s point of view the request wasn’t triggered by explicit user interaction.

The extension can now address multiple players embedded as iframes on the same page properly. Previously, pressing play or pause would cause all players on that page to start or stop playback. Generally, iframe player support has been vastly improved, now able to survive document.write calls and control iframes created purely through scripting. Furthermore, support for controlling Audio objects created from pure JavaScript will now consider Chrome’s audio focus stealing prevention.

Enhanced Media Controls, which adds support for the Media Session API (enabling websites to provide detailed track information, album art, and more playback controls), has been updated to match changes in the specification and made less invasive. It is now enabled by default; for existing installations you might have to enable this option explicitly in the extension settings.

Better error reporting

A crossed-out icon indicates it’s not running, and the popup gives some advice about what’s going on

This update moves all status and error reporting to a toolbar icon (“browser action”). Previously, in case the extension failed to start or encountered an error, a notification popup was shown. This was especially bothersome when the extension was automatically synced to a computer where it wasn’t installed or supported. Currently, the popup does not offer any functionality besides error reporting and is disabled when everything’s alright. Future versions may add additional functionality to the popup leaving it enabled the entire time.


The threshold for preventing controlling short players has been increased from 5 to 8 seconds. This threshold keeps it from trying to control your “new message” sound but now also avoids controlling short hover video previews on certain sites.

Finally, there has been some work under the hood to make the code more maintainable and extensible. For instance, since the so-called “native host” that acts as a translator between browser and desktop is released by distributions independently of the browser extension, an infrastructure has been added so that the extension can query which features are actually supported on the host side. This allows us to make substantial changes to the extension while at the same time keeping compatibility with older Plasma versions. It also enables us to add new features in the extension that will then just not be offered in case they are not supported by the host.

KDE websites infrastructure update and new websites

Thursday 29th of August 2019 09:35:35 AM

Since my latest post two months ago, a lot of things changed regarding the KDE websites. More and more KDE websites are switching to the Aether theme designed by Ken Vermette. You can follow the progress at the Phabricator task T10827.

New websites
  • All three wikis were updated to MediaWiki 1.31 and are now using the Aether MediaWiki theme. Some visual glitches can still be observed when using the dark theme version. They are usually very simple to fix, so please report them in

  • The French-speaking KDE website was also updated to use the Aether Jekyll theme. It’s only an aesthetic change and some parts of the content still need to be updated. If you speak French and want to help, contact the French-speaking KDE community in #kde-fr (IRC/Matrix) or via email.

  • Choqok also got a new website.

Behind the scenes

Using a single codebase for the CSS theming

One of the big problems encountered was the multiplication of different versions of the CSS files: one CSS file shared by a couple of websites, one for all the MediaWiki instances, and yet another one elsewhere. This was getting harder and harder to maintain, so I decided to create a single SASS codebase for all the KDE websites.

The code is located in the KDE Gitlab instance and uses Symfony Encore to generate all the CSS files from the SASS codebase.

For the moment, the CSS code is only split into multiple SASS modules and the tooling builds multiple versions using some generic components (breeze buttons) and other more specific components (MediaWiki dark theme).

Compiling the SASS files to CSS is done using the KDE Binary Factory and produces two versions of each file: one with versioning (e.g. bootstrap.hi8yh2huh.css), intended for all the dynamic websites (MediaWiki, …), and one without versioning for the static websites (Hugo and Jekyll).

Using versioned assets in a MediaWiki skin

MediaWiki uses ResourceLoader to load assets, but in our case we want to use the Webpack-generated manifest.json to load the versioned file. For this, I’m using the Symfony Asset component, which does most of the job for me.

Here is the code used:

$urlPackage = new UrlPackage(
    '',
    new JsonManifestVersionStrategy('')
);
$out->addStyle(
    $urlPackage->getUrl('aether-devel/version/bootstrap.css'),
    'all'
);

I actually created a fork of the JsonManifestVersionStrategy class because of a bug in the library. I hope my change will be merged upstream.

New features in the Jekyll theme

The Jekyll theme got some nice new features added by various contributors.

And special thanks to Mark Winter for fixing a small typo.

Junior Job

There are multiple other websites in the pipeline and any help is welcome.

If you are interested in one of these tasks, join us in #kde-www on IRC/Matrix.

Websites BoF at Akademy

I’m going to Akademy and I will organize a BoF about the websites. It will take place on Tuesday, 10th September 2019 in room U2-08b, just after the Plasma BoFs.

If you are interested and want to attend, you can add yourself to the list in T11423.

Mounamnamat Médias Teaches Animation using Krita

Thursday 29th of August 2019 09:20:27 AM

Amine Sossi Alaoui and Sonia Didier write to tell us about their experience teaching children 2D animation using Krita, with some very cool results:

We’re a Moroccan animation studio, created 6 months ago and based in Rabat, Morocco. Before that we worked in animation studios in France for 10 years. Our goal now is to develop the animation industry in Morocco and Africa. It’s a long way to go, and for the moment we’re just beginning with 2D animation. Krita is a great tool for that, and we’re very happy to use it and to share the knowledge we have about it.

So, this summer we wanted children to learn about 2D animation, so we created a one-week animation course for children from 8 to 14 years old. It ran 2 hours per day for 5 days, for a group of 8 to 12 children. The goal was to create a one-minute animated short film in 2D, from the writing of the story through storyboard, backgrounds, animation, colorisation, and compositing.

For that we chose to use Krita (and Shotcut for the final compositing and sound). It’s great software, very complete and fun to work with. And as it’s free, we’re sure that the children could use it at home if they like, to make their own projects.

Finally, four groups of children worked with Krita this summer, and the results are on our YouTube channel for anyone to watch. Here are the first three films; the fourth is not yet on YouTube but should be shortly, once it’s finalized.

We hope you’ll all like it!

Now we hope that we’ll have other projects with Krita, and we’ll be happy to share them with you.

Cloud providers and telemetry via Qt MQTT

Thursday 29th of August 2019 07:01:57 AM

This is a follow up to my previous posts about using Qt MQTT to connect to the cloud. MQTT is a prominent standard for telemetry, especially in the IoT scenario.


We are often approached by Qt customers and users asking how to connect to a variety of cloud providers, preferably keeping the requirements list short.

With this post I would like to provide some more information on how to create a connection by just using Qt, without any third-party dependency. For this comparison we have chosen the following cloud providers:

The ultimate summary can be viewed in this table


The source code to locally test the results is available here.

However, if you are interested in this topic, I recommend preparing a pitcher of coffee and continue reading…

And if you want to jump to a specific topic, use these shortcuts:

Getting connected
Standard derivation (limitations)
Available (custom) topics
Communication routes
Other / references
Additional notes
How can I test this myself?
Closing words

Preface / Setting expectations

Before getting into the details I would like to emphasize some details.

First, the focus is on getting devices connected to the cloud. Being able to send and receive messages is the prime target. This post will not talk about services, features, or costs by the cloud providers themselves once messages are in the cloud.

Furthermore, the idea is to only use Qt and/or Qt MQTT to establish a connection. Most, if not all, vendors provide SDKs for devices or for monitoring (web and native) applications. However, using these SDKs adds further dependencies, leading to higher storage and memory requirements.

The order in which the providers are being evaluated in this article is based on public usage according to this article.

Getting connected

The very first steps for sending messages are to create a solution for each vendor and then establish a TCP connection.

Amazon IoT Core

We assume that you have created an AWS account and an IoT Core service from your AWS console in the browser.

The dashboard of this service looks like this:

The create button opens a wizard which will help set up the first device.

The only required information is the name of the device. All other items can be left empty.

The service allows you to automatically create a certificate to be used for a connection later.

Store the certificates (including the root CA) and keep them available to be used in an application.

For now, no policy is required. But we will get into this at a later stage.

The last missing piece before implementing an example is the hostname to connect to. AWS provides a list of endpoints here. Please note that for MQTT you must use the account-specific prefix. You can also find this information on the settings page of the AWS IoT dashboard.

Using Qt MQTT, a connection is then established with those few lines:

const QString host = QStringLiteral("<your-endpoint>");
const QString rootCA = QStringLiteral("root-CA.crt");
const QString local = QStringLiteral("<device>.cert.pem");
const QString key = QStringLiteral("<device>.private.key");

QMqttClient client;
client.setKeepAlive(10000);
client.setHostname(host);
client.setPort(8883);
client.setClientId("basicPubSub");

QSslConfiguration conf;
conf.setCaCertificates(QSslCertificate::fromPath(rootCA));
conf.setLocalCertificateChain(QSslCertificate::fromPath(local));
QSslKey sslkey(readKey(key), QSsl::Rsa);
conf.setPrivateKey(sslkey);

client.connectToHostEncrypted(conf);

A couple of details are important for a successful connection:

  • The keep-alive value needs to stay within a certain threshold; 10 seconds seems to be a good value.
  • Port 8883 is the standardized port for encrypted MQTT connections.
  • The client ID must be basicPubSub. This is a valid ID auto-generated during the creation of the IoT Core instance.


Microsoft Azure IoT Hub

First, an account for the Azure Portal needs to be created. From the dashboard, you need to create a new “IoT Hub” resource.

The dashboard can be overwhelming initially, as Microsoft puts many cloud services and features front and center. As the focus is on getting a first device connected, the simplest way is to go to Shared access policies and create a new access policy with all rights enabled.

This is highly discouraged in a production environment for security reasons.

After selecting the freshly created policy, we can copy the connection string.

Next, we will use the Azure Device Explorer application, which can be downloaded here. This application is perfectly suited for testing purposes. After launching it, enter the connection string from above into the connection text edit and click Update.

The Management tab allows for creating new test devices, specifying authentication either via X509 or via security keys. Security keys are the preselected standard method, which is what we will use as well.

Lastly, the Device Explorer allows us to create a SAS token, which will be needed to configure the MQTT client. A token has the following shape:

HostName=<yourIoTHub>;DeviceId=<yourDeviceName>;SharedAccessSignature=SharedAccessSignature sr=<yourIoTHub>…..

We only need this part for authentication:

SharedAccessSignature sr=<yourIoTHub>…..

The Azure IoT Hub uses TLS for the connection as well. To obtain the root CA, you can clone the Azure IoT C SDK located here or get the DigiCert Baltimore Root Certificate manually. Neither the web interface nor the Device Explorer provides it.

To establish a connection from a Qt application using Qt MQTT, the code looks like this:

const QString iotHubName = QStringLiteral("<yourIoTHub>");
const QString iotHubHostName = iotHubName + QStringLiteral(".azure-devices.net");
const QString deviceId = QStringLiteral("<yourDeviceName>");

QMqttClient client;
client.setPort(8883);
client.setHostname(iotHubHostName);
client.setClientId(deviceId);
client.setUsername(iotHubHostName + QStringLiteral("/") + deviceId
                   + QStringLiteral("/?api-version=2018-06-30"));
client.setPassword(QLatin1String("SharedAccessSignature sr=<yourIoTHub>…"));

auto caCerts = QSslCertificate::fromData(QByteArray(certificates));
QSslConfiguration sslConf;
sslConf.setCaCertificates(caCerts);
client.connectToHostEncrypted(sslConf);

Google Cloud IoT Core

Once you have created an account for the Google Cloud Platform, the web interface provides a wizard to get your first project running using Cloud IoT Core.

Once the project has been created, it might be hard to find your registry. A registry stores all information on devices, communication, rules, etc.

Similar to Microsoft Azure, all available services are placed on the dashboard. You will find the IoT Core item in the Big Data section on the left side.

After using the Google Cloud Platform for a while, you will find the search very useful to get to your target page.

From the registry itself you can now add new devices.

The interface asks you to provide the keys/certificates for your device, but it does not offer a means to create them from the service itself. Documentation exists on how to create these, and at the production stage those steps will probably be automated in a different manner. For getting started, however, these are additional required steps that can become a hurdle.

Once your device is entered into the registry, you can start with the client side implementation.

Unlike the other providers, Google Cloud IoT Core does not use the device certificate when creating a connection. Instead, the private key is used to create the password, which needs to be generated as a JSON Web Token. While JSON Web Tokens are an open industry standard, this adds another dependency to your project: something needs to be able to create these tokens. Google provides some sample code here, but adapting it for inclusion in an application is required.

The client ID for the MQTT connection is constructed from multiple parameters and has the following form:

projects/PROJECT_ID/locations/REGION/registries/REGISTRY_ID/devices/DEVICE_ID
From personal experience: be aware of case sensitivity. Everything but the project ID keeps the capitalization you used when creating your project, registry and device; the project ID, however, is stored in all lower case.

Having considered all of this, the simplest implementation to establish a connection looks like this:

const QString rootCAPath = QStringLiteral("root_ca.pem");
const QString deviceKeyPath = QStringLiteral("rsa_private.pem");
const QString clientId = QStringLiteral("projects/PROJECT_ID/locations/REGION/registries/REGISTRY_ID/devices/DEVICE_ID");
const QString googleiotHostName = QStringLiteral("mqtt.googleapis.com");
const QString password = CreateJwt(deviceKeyPath, "<yourprojectID>", "RS256");

QMqttClient client;
client.setKeepAlive(60);
client.setPort(8883);
client.setHostname(googleiotHostName);
client.setClientId(clientId);
client.setPassword(password);

QSslConfiguration sslConf;
sslConf.setCaCertificates(QSslCertificate::fromPath(rootCAPath));
client.connectToHostEncrypted(sslConf);

Alibaba Cloud IoT Platform

The Alibaba Cloud IoT Platform is the only product that comes in multiple variants, a basic and a pro version. As of this writing, this product structure seems to have changed; from what we can tell, it does not affect the MQTT-related items investigated here.

After creating an account for the Alibaba Cloud, the web dashboard allows you to create a new IoT Platform instance.

Following the instantiation, a wizard interface lets you create a product and a device.

From these, we need a couple of details to establish an MQTT connection:

  • Product Key
  • Product Secret
  • Device Name
  • Device Secret

The implementation requires a couple of additional steps. To acquire all the MQTT-specific properties, the client ID, username and password are created by concatenation and signing. This procedure is fully documented here. For convenience, the documentation also includes example source code to handle this. If you are concerned about introducing external code, you can follow the instructions in the first link instead.

To connect a QMqttClient instance, this is sufficient:

iotx_dev_meta_info_t deviceInfo;
qstrcpy(deviceInfo.product_key, "<yourproductkey>");
qstrcpy(deviceInfo.product_secret, "<yourproductsecret>");
qstrcpy(deviceInfo.device_name, "<yourdeviceID>");
qstrcpy(deviceInfo.device_secret, "<yourdeviceSecret>");

iotx_sign_mqtt_t signInfo;
int32_t result = IOT_Sign_MQTT(IOTX_CLOUD_REGION_GERMANY, &deviceInfo, &signInfo);

QMqttClient client;
client.setKeepAlive(10000);
client.setHostname(QString::fromLocal8Bit(signInfo.hostname));
client.setPort(signInfo.port);
client.setClientId(QString::fromLocal8Bit(signInfo.clientid));
client.setUsername(QString::fromLocal8Bit(signInfo.username));
client.setPassword(QString::fromLocal8Bit(signInfo.password));
client.connectToHost();


You might notice that we are not using QMqttClient::connectToHostEncrypted() as for all the other providers. The Alibaba Cloud IoT Platform is the only vendor that uses a non-TLS connection by default. It is documented that a TLS connection is possible and that a root CA can be obtained; still, the fact that unencrypted connections are the default is surprising.

Standard deviations (limitations)

So far, we have established an MQTT connection to each of the IoT vendors. Each uses a slightly different approach to identifying and authenticating a device, but all of these services follow the MQTT 3.1.1 standard.

However, for the next steps developers need to be aware of certain limitations or variations to the standard. These will be discussed next.

None of the providers has built-in support for quality-of-service (QoS) level 2. To some extent that makes sense, as telemetry data does not require multiple steps to verify message delivery; whether a message is processed and validated is not of interest in this scenario. A developer should be aware of this limitation, though.

To refresh our memory on terminology, let us briefly recap retained and will messages.

Retained messages are stored on the server side for future subscribers, so they receive the last information available on a topic. Will messages are embedded in the connection request and are only propagated in case of an unexpected disconnect of the client.


Amazon IoT Core

The client ID is used to identify a device. If a second device uses the same ID during a connection attempt, the first device is disconnected without any notice, and the second device connects successfully. If your application code contains some sort of automatic reconnect, this can cause all devices with the same client ID to become unavailable.

Retained messages are not supported by AWS and trying to send a retained message will cause the connection to be closed.

AWS IoT Core supports will messages within the given allowed topics.

A full description of standard deviations can be viewed here.


Microsoft Azure IoT Hub

The client ID is used to identify a device. The behavior of two devices with the same ID is the same as for Amazon IoT Core.

Retained messages are not supported on the IoT Hub. However, the documentation states that the Hub will internally append a flag to let the backend know that the message was intended to be retained.

Will messages are allowed and supported, given the topic restrictions which will be discussed below.

A full description of standard deviations can be viewed here.


Google Cloud IoT Core

This provider uses the client ID and the password to successfully identify a device.

Messages flagged as retained seem to lose this flag during delivery; according to the debug logs, they are forwarded as regular messages. We have not found any documentation on whether it might behave similarly to the Azure IoT Hub, which forwards this request to its internal message queue.

Will messages do not seem to be supported. While it is possible to store a will message in the connect statement, it gets ignored in case of an irregular disconnect.


Alibaba Cloud IoT Platform

The triplet of client ID, username and password is used to identify a device within a product.

Both the retain flag and will messages are ignored on the server side. A message with the retain flag set is forwarded as a regular message and lost after delivery; will messages are not stored anywhere during a connection.

Available (custom) Topics

MQTT uses a topic hierarchy to create a fine-grained context for messages. Topics are similar to a directory structure, starting from generic to device-specific. One example of a topic hierarchy would be sensors/<building>/<floor>/<deviceID>/temperature.


Each IoT provider handles topics differently, so developers need to be very careful in this area.


Amazon IoT Core

First, check which topics can be used by default. From the dashboard, browse to Secure -> Policies and select the policy created by default. It should look like this:

AWS IoT Core specifies policies in JSON format, and you will find some of the previously discussed details specified in this document. For instance, the available client IDs are specified in the Connect resource. It also lets you declare which topics are valid for publishing, subscribing and receiving. It is possible to have multiple policies in place, and devices need to have a policy attached. This allows for a fine-grained security model in which different device groups have different access rights.
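For illustration, a minimal policy document granting a single client access to one topic could look like the following sketch; the region, account ID and names are placeholders, not values from the dashboard shown above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:eu-central-1:123456789012:client/basicPubSub"
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Publish", "iot:Receive"],
      "Resource": "arn:aws:iot:eu-central-1:123456789012:topic/topic_1"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:eu-central-1:123456789012:topicfilter/topic_1"
    }
  ]
}
```

Note that publish/receive rules target topic/… resources while subscribe rules target topicfilter/… resources.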

Note that the topic description also allows wildcards. These should not be confused with the wildcards in the MQTT standard: you must use * instead of # to enable all subtopics.

Once you have created a topic hierarchy based on your needs, the code itself is simple:

client.publish(QStringLiteral("topic_1"), "{\"message\":\"Somecontent\"}", 1);
client.subscribe(QStringLiteral("topic_1"), 1);


Microsoft Azure IoT Hub

The IoT Hub merely acts as an interface to connect existing MQTT solutions to the Hub. A user is not allowed to specify any custom topic, nor is it possible to introduce a topic hierarchy.

A message can only be published in the following shape:

const QString topic = QStringLiteral("devices/") + deviceId + QStringLiteral("/messages/events/");
client.publish(topic, "{id=123}", 1);


Similar limitations exist for subscriptions:

client.subscribe(QStringLiteral("devices/") + deviceId + QStringLiteral("/messages/devicebound/#"), 1);


The wildcard for the subscription is used for additional information that the IoT Hub might add to a message, for instance a message ID. To combine multiple properties, the subtopic itself is URL-encoded. An example message sent from the IoT Hub has this topic included:



Google Cloud IoT Core

By default, an MQTT client should use this topic for publication:

/devices/<deviceID>/events
But it is also possible to add additional topics using the Google Cloud Shell or other APIs.

In this case a topic customCross has been created. Those additional topics are reflected as subtopics on the MQTT side, meaning that to publish a message to this topic, it would be

/devices/<deviceID>/events/customCross
For subscriptions, custom topics are not available; there are only two topics a client can subscribe to:

/devices/<deviceID>/commands/#
/devices/<deviceID>/config/

Config messages are retained messages from the cloud. They will be sent every time a client connects, to keep the device in sync.


Alibaba Cloud IoT Platform

Topics can easily be managed in the Topic Categories tab of the product dashboard.

Each topic can be configured for receiving, sending or bidirectional communication. Furthermore, a couple of additional topics are generated by default to help create a scalable structure.

Note that a topic always contains the device ID. This has implications for communication routes, as mentioned below.

Communication routes

Communication in the IoT context can be split into three different categories:

  1. Device to Cloud (D2C)
  2. Cloud to Device (C2D)
  3. Device to Device (D2D)

The first category is the most common one: devices provide information about their state, sensor data or any other kind of information. Communication in the other direction happens when providing behavior instructions, managing debug levels or sending any other generic instruction.

Regarding device-to-device communication, we need to be a bit more precise about the definition in this context. A typical example can be taken from home automation: given a certain light intensity, the sensor propagates the information and the blinds automatically react by going down (something which never seems to work properly in office spaces). Here, all logic is handled on the devices and no cloud intelligence is needed; no additional rules or filters need to be created in the cloud instance itself. Certainly, all tested providers can instantiate a method running in the cloud and then forward a command to another device, but that process is not part of this investigation.


Amazon IoT Core

In the previous section we already covered the D2C and C2D cases. Once a topic hierarchy has been specified a client can publish to these topics, and also subscribe to one.

To verify that the C2D connection works, select the Test tab on the left side of the dashboard. The browser will show a minimal interface which allows sending a message on a specified topic.

Also, the device-to-device case is handled nicely by subscribing and publishing to a topic as specified in the policy.

Microsoft Azure IoT Hub

It is possible to send messages from a device to the cloud and vice-versa. However, a user is not free to choose a topic.

For sending, the Device Explorer is a good utility, especially for testing the property-bag feature.

Device-to-device communication as per our definition is not possible using the Azure IoT Hub.

During the writing of this post, this article popped up. It covers exactly this use case, using the Azure SDKs instead of plain MQTT. The approach there is to put the Service SDK on the recipient device; for bidirectional communication, it would therefore be needed on all devices, with the advantage of not routing through any server.

Google Cloud IoT Core

Sending messages from a device to the cloud is possible, with further granularity provided by subtopics for publication. Messages are received on the two available topics discussed in the section above.

As the custom topics still include the device ID, it is not possible to use a Google Cloud IoT Core instance as a standard broker to propagate messages between devices (D2D).

The dashboard for a device allows sending a command, as well as a configuration, from the cloud interface to the device itself.


Alibaba Cloud IoT Platform

Publishing and Subscribing can be done in a flexible manner using the IoT Platform. (Sub-)Topics can be generated to provide more structure.

To test sending a message from the cloud to a device, the Topic List in the device dashboard includes a dialog.

Device-to-device communication is also possible. The topics for it cannot be freely specified: they must reside exactly one level below the device's default topic. The topic on that sub-level can be chosen freely.

Other / References

  • Amazon IoT Core
  • Microsoft Azure IoT Hub
  • Google Cloud IoT Core
  • Alibaba Cloud IoT Platform


Additional notes

MQTT version 5 seems to be too young for the biggest providers to adopt. This is very unfortunate, given that the latest standard adds a couple of features specifically useful in the IoT world: shared subscriptions would allow for automatic balancing of tasks, the new AUTH command allows for higher flexibility when registering devices, and connection and message properties enable cloud connectivity that is more performant and easier to restrict and configure. But at this point in time, we will have to wait for its adoption.

Again, I want to emphasize that we have not looked into any of the features the above IoT solutions provide to handle messages once received. That is part of a completely different study, and we would be very interested in hearing about your results in that field.


Additionally, we have not covered RPC usage with these providers. Some have hard-coded topics to handle RPC, like Google differentiating between commands and configuration; Alibaba even uses default topics to handle firmware-update notifications via MQTT. Trend Micro has released a study on security-related concerns in MQTT in which RPC has a prominent spot: a must-read for anyone setting up an MQTT architecture from scratch.

How can I test this myself?

I’ve created a sample application which allows connecting to any of the above cloud vendors once the required details are available. The interface itself is rather simple:


You can find the source code here on GitHub.

Closing words

For any of the broader IoT and cloud providers, it is possible to connect a telemetry-based application using MQTT (and Qt MQTT). Each varies in its connection details and in the extent to which the standard is fully available to developers.

Personally, I look forward to the adoption of MQTT version 5. The AUTH command allows for better integration of authentication methods, and features like topic aliases and properties enable further use cases in the IoT world. Additionally, shared subscriptions are beneficial for creating a data/worker relationship between devices. This last point, however, might step on the toes of the cloud vendors, whose purpose is to handle that load inside the cloud.

I would like to close this post with a few questions for you:

  • What is your experience with these cloud solutions?
  • Is there anything in the list I might have missed?
  • Should other vendors or companies be included as well?

Looking forward to your feedback…

The post Cloud providers and telemetry via Qt MQTT appeared first on Qt Blog.

Little Trouble in Big Data – Part 3

Wednesday 28th of August 2019 12:22:54 PM

In the previous two blogs in this series I showed how solving an apparently simple problem about loading a lot of data into RAM using mmap() also turned out to require a solution that improved CPU use across cores.

In this blog, I’ll show how we dealt with the bottleneck problems that ensued and, finally, how we turned to coarse threading to utilize the available cores as well as possible whilst keeping the physical memory usage manageable.

These are the stages we went through:

  1. Preprocessing
  2. Loading the Data
  3. Fine-grained Threading

Now we move to Stage 4, where we tackle the bottleneck problem:

4 Preprocessing Reprise

So far so good, right? Well, yes and no. We’ve improved things to use multithreading and SIMD, but profiling in VTune still showed bottlenecks; specifically, in the IO subsystem, paging the data from disk into system memory (via mmap). The access pattern through the data is the classic thing we see with textures in OpenGL when the texture data doesn’t all fit into GPU memory: it ends up thrashing the texture cache with the typical throw-out-the-least-recently-used-stuff, as of course we need the oldest stuff again on the next iteration of the outer loop.

This is where the expanded, preprocessed data is biting us in the backside. We saved runtime cost at the expense of disk and RAM usage and this is now the biggest bottleneck to the point where we can’t feed the data from disk (SSD) to the CPU fast enough to keep it fully occupied.

The obvious thing would be to reduce the data size, but how? We can’t use the old BED file format, as the quantization used is too coarse for the offset + scaled data. We can’t use lower precision floats as that only reduces by a small constant factor. Inspecting the data of some columns in the matrix, I noticed that there are very many repeated values. Which makes total sense given the highly quantized input data. So, we tried compressing each column using zlib. This worked like magic – the preprocessed data came out only 5% larger than the quantized original BED data file!

Because we are compressing each column of the matrix independently, and the compression ratio varies depending upon the needed dictionary size and the distribution of repeated elements throughout the column, we need a way to be able to find the start and end of each column in the compressed preprocessed bed file. So, whilst preprocessing, we also write out a binary index companion file which, for each column, stores the offset of the column start in the main file and its byte size.

So when we want to process a column of data in the inner loop, we look up in the index file the extent of the column’s compressed representation in the mmap()‘d file, decompress that into a buffer of the right size (we know how many elements each column has: it’s the number of people) and then wrap that up in the Eigen Map helper.

Using zlib like this really helped in reducing the storage and memory needed. However, now profiling showed that the bottleneck had shifted to the decompression of the column data. Once again, we have improved things, but we still can’t keep the CPU fed with enough data to occupy it for that inner loop workload.

5 Coarse Threading

How to proceed from here? What we need is a way to balance the CPU threads and cycles used for decompressing the column data with the threads and cycles used to then analyze each column. Remember that we are already using SIMD vectorization and parallel_for and parallel_reduce for the inner loop workload.

After thinking over this problem for a while I decided to have a go at solving it with another feature of Intel TBB, the flow graph. The flow graph is a high-level interface and data driven way to construct parallel algorithms. Once again, behind the scenes this eventually gets decomposed into the threadpool + tasks as used by parallel_for and friends.

The idea is that you construct a graph from various pre-defined node types into which you can plug lambdas for performing certain operations. You can then set options on the different node types and connect them with edges to form a data flow graph. Once set up, you send data into the graph via a simple message class/struct and it flows through until the results fall out of the bottom of the graph.

There are many node types available but for our needs just a few will do:

  • Function node: use this along with a provided lambda to perform some operation on your data, e.g. decompress a column of data or perform the doStuff() inner-loop work. This node type can be configured with how many parallel task instantiations it may make, from serial behavior up to any positive number. We will need both, as we shall see.
  • Sequencer node: use this node to ensure that data arrives at later parts of the flow graph in the correct order. Internally it buffers incoming messages and uses a provided comparison functor to re-order them, ready for output to successor nodes.
  • Limiter node: use this node type to throttle the throughput of the graph. We can tell it the maximum number of messages to buffer from predecessor nodes. Once it reaches this limit, it blocks any further input messages until another node triggers it to continue.

I’ve made some very simple test cases of the flow graph in case you want to see how it works and how I built up to the final graph we used in practice.

The final graph used looks like this:

A few things to note here:

  1. We have a function node to perform the column decompression. This is allowed to use multiple parallel tasks as each column can be decompressed independently of the others due to the way we compressed the data at preprocess time.
  2. To stop this from decompressing the entire data set as fast as possible and blowing up our memory usage, we limit this with a limiter node set to some small number roughly equal to the number of cores.
  3. We have a second function node limited to sequential behavior that calls back to our algorithm class to do the actual work on each decompressed column of data.

Then we have the two ordering nodes. Why do we need two of them? The latter ensures that the data coming out of the decompression node tasks arrives in the order that we expect (as queued up by the inner loop). This is needed because, due to the kernel time-slicing the CPU threads, they may finish in a different order than they were enqueued.

The requirement for the first ordering node is a little more subtle. Without it, the limiter node may select messages from the input in an order such that it fills up its internal buffer but without picking up the first message which it needs to send as an output. Without the ordering node up front, the combination of the second ordering node and the limiter node may cause the graph to effectively deadlock. The second ordering node would be waiting for the nth message, but the limiter node is already filled up with messages which do not include the nth one.

Finally, the last function node which processes the “sequential” (but still SIMD and parallel_for pimped up) part of the work uses a graph edge to signal back to the limiter node when it is done so that the limiter node can then throw the next column of data at the decompressor function node.

With this setup, we have a high level algorithm which is self-balancing between the decompression steps and the sequential doStuff() processing! That is actually really nice, plus it is super simple to express in just a few lines of code and it remains readable for future maintenance. The code to setup this graph and to queue up the work for each iteration is available at github.

The resulting code now uses 100% of all available cores and is balancing the work of decompression and processing the data. Meanwhile the data processing also utilizes all cores well. The upside of moving the inner loop to be represented by the flow graph means that the decompression + column processing went from 12.5s per iteration (on my hexacore i7) to 3s. The 12.5s was measured with the sequential workload already using parallel_for and SIMD. So this is another very good saving.


We have shown how a simple “How do I use mmap()?” mentoring project has grown beyond its initial scope, and how we used mmap, Eigen, parallel_for/parallel_reduce, flow graphs and zlib to make the problem tractable. This has yielded a nice set of performance improvements whilst at the same time keeping the disk and RAM usage within feasible limits.

  • Shifted work that can be done once to a preprocessing step
  • Kept the preprocessed data size down as low as possible with compression
  • Managed to load even large datasets into memory at once with mmap
  • Parallelized the inner loop operations at a low level with parallel_for
  • Parallelized the high-level loop using the flow graph and made it self-balancing
  • Utilized the available cores fairly optimally whilst keeping the physical memory usage down (roughly the number of threads used × column size).

Thanks for reading this far! I hope this helps reduce your troubles, when dealing with big data issues.

The post Little Trouble in Big Data – Part 3 appeared first on KDAB.

GSoC ’19 comes to an end

Wednesday 28th of August 2019 12:08:39 PM

GSoC period is officially over and here is a final report of my work in the past 3 months.

QmlRenderer library

The library will be doing the heavy lifting by rendering QML templates to QImage frames using QQuickRenderControl in the new MLT QML producer. Parameters that can be manipulated are:

  • FPS
  • Duration
  • DPI
  • Image Format

The library can be tested using QmlRender (a CLI executable).


./QmlRender -i "/path/to/input/QML/file.qml" -o "/path/to/output/directory/for/frames"

./QmlRender --help reveals all the available options that may be manipulated.

MLT QML Producer

What has been done so far?

  • A working and tested QmlRenderer library
  • Basic code to the QML MLT producer

What work needs to be done?

  • Full-fledged MLT QML producer
  • Basic titler on Kdenlive side to test

Check out the full in-depth GSoC report here.

The whole experience over the last 8 months, right from the first patch to the titler project, has been great. The learning curve was steep, but I have thoroughly enjoyed the process. I aim to continue improving Kdenlive, and I’m really thankful to all the Kdenlive developers and the community for presenting me with this fine opportunity to work on the revamp of an important feature in our beloved editor.

Although GSoC is “officially” over, the new Titler as a project in whole is far from done and I will continue working on it. So nothing really changes. 

The next update will be when we get a working backend set up – until then!

polkit-qt-1 0.113.0 Released

Tuesday 27th of August 2019 07:09:59 PM

Some 5 years after the previous release KDE has made a new release of polkit-qt-1, versioned 0.113.0.

Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones.   Polkit has an authorization API intended to be used by privileged programs (“MECHANISMS”) offering service to unprivileged programs (“CLIENTS”).

Polkit Qt provides Qt bindings and UI.

This release was done ahead of additions to KIO to support Polkit.

GPG fingerprint:

Notable changes since 0.112.0
– Add support for passing details to polkit
– Remove support for Qt4

Thanks to Heiko Becker for his work on this release.

Full changelog

  •  Bump version for release
  •  Don’t set version numbers as INT cache entries
  •  Move cmake_minimum_required to the top of CMakeLists.txt
  •  Remove support for Qt4
  •  Remove unneded documentation
  •  authority: add support for passing details to polkit
  •  Fix typo in comments
  •  polkitqtlistener.cpp – pedantic
  •  Fix build with -DBUILD_TEST=TRUE
  •  Allow compilation with older polkit versions
  •  Fix compilation with Qt5.6
  •  Drop use of deprecated Qt functions REVIEW: 126747
  •  Add wrapper for polkit_system_bus_name_get_user_sync
  •  Fix QDBusArgument assertion
  • do not use global static systembus instance


Day 92 – The last day

Monday 26th of August 2019 12:08:00 PM

After the second coding period, I was at the beginning of the backend development. I’ll list and explain what was done during this period. After GSoC, I’ll keep working on Khipu to move it out of beta soon; then I’ll fix the bugs and try to implement the missing pieces and new features.

GitHub link:

Finished or almost finished tasks:

Plot Dictionaries: an old Khipu feature that gives the user examples of valid inputs and of what the program can do. The old version was a window with premade examples. I tried to make it simpler: it now just creates a 2D and a 3D space with some examples. I changed the name to “Plot Examples”.

Save/Load files: the save option saves your spaces and plots in a JSON file, and the load option loads them; only JSON files are accepted.

Edit space name: when the user creates a new space, the default name is “2D Space” or “3D Space”; the user can then rename the space.

Edit plot dialog: a dialog that lets the user set the expression, visibility and color of a plot in a given space. It currently crashes the program in some situations.

Search box: it lets the user find a space. It works, but there is a bug when the user clicks on a search result: for example, clicking on the third search result opens the third space in the original list, not the third result.

Menubar visibility: I chose the F1 key to show/hide the menubar, but it’s not working and I still don’t know why.

Old features that still were not implemented:

Help menu: the menu with information about KDE, Khipu’s developers, the documentation and bug reporting.

Grid settings: lets the user select which kind of grid (normal or polar) is used and set the grid color.

Cylindrical surfaces, parametric surfaces, spatial curves, parametric curves, polar curves: I used KAlgebra Mobile as a reference for my code, so I only created a simple box which receives a simple expression. You still can’t set intervals or create other types of functions, but you can plot simple and implicit functions, plot lines and planes like “x=3”, and solve equations like “sin(x) = 1”.

Snapshot: a picture of each space, shown alongside the space’s information.

Open Source is more than licenses

Monday 26th of August 2019 10:15:26 AM

A few weeks ago I was honored to deliver the keynote of the Open Source Awards in Edinburgh. I decided to talk about a subject that I wanted to talk about for quite some time but never found the right opportunity for. There is no video recording of my talk but several people asked me for a summary. So I decided to use some spare time in a plane to summarize it in a blog post.

I started to use computers and write software in the early 80s when I was 10 years old. This was also the time when Richard Stallman wrote the 4 freedoms, started the GNU project, founded the FSF and created the GPL. His idea was that users and developers should be in control of the computer they own which requires Free Software. At the time the computing experience was only the personal computer in front of you and the hopefully Free and Open Source software running on it.

The equation was (Personal Hardware) + (Free Software) = (Digital Freedom)

In the meantime the IT world has changed and evolved a lot. Now we have ubiquitous internet access, computers in cars, TVs, watches and other IoT devices. We have the full mobile revolution. We have cloud computing, where data storage and compute are distributed over different data centers owned and controlled by different people and organizations all over the world. We have strong software patents, DRM, code signing and other crypto, software as a service, more closed hardware, social networking and the power of the network effect.

Overall the world has changed a lot since the 80s. Most of the Open Source and Free Software community still focuses mainly on software licenses. I’m asking myself if we are not missing the bigger picture by limiting the Free Software and Open Source movement to licensing questions only.

Richard Stallman wanted to be in control of his computer. Let’s go through some of the current big questions regarding control in IT and let’s see how we are doing:

Facebook

Facebook has lately come under a lot of attack for countless violations of user privacy, involvement in election meddling, helping trigger a genocide in Myanmar, threatening democracy and many other things. Let’s see if Free Software would solve this problem:

If Facebook released all its code tomorrow as Free and Open Source software, our community would be super happy. We would have won! But would it really solve any problems? I can’t run Facebook on my own computer because I don’t have a Facebook server cluster. And even if I could, it would be very lonely there, because I would be the only user. So Free Software is important and great, but in the Facebook case it doesn’t actually give users any freedom or control. More is needed than Free Software licenses.

Microsoft

I hear from a lot of people in the Free and Open Source community that Microsoft is good now: they changed under the latest CEO and are no longer the evil empire. They now ship a Linux kernel in Windows 10 and provide a lot of Free and Open Source tools in their Linux containers in the Azure cloud. I think it’s definitely a nice step in the right direction, but their cloud solutions still have the strongest vendor lock-in, and Windows 10 is neither free in price nor gives you freedom. In fact they don’t have an Open Source business model anywhere; they just use Linux and Open Source. So the fact that more software in the Microsoft ecosystem is now available under Free Software licenses doesn’t give any more freedom to the users.

Machine Learning

Machine Learning is an important new technology that can be used for many things, from picture recognition to voice recognition to self-driving cars. The interesting thing is that the hardware and the software alone are useless: a working machine learning system also needs the data to train the neural network. This training data is often the secret ingredient, and it is super valuable. So if Tesla released all their software tomorrow as Free Software, and you bought a Tesla to have access to the hardware, you would still be unable to study, build and improve the self-driving car functionality. You would need the millions of hours of video recordings and driver data to make your neural network useful. So Free Software alone is not enough to give users control.

5G

There is a lot of discussion in the Western world about whether 5G infrastructure can be trusted. Do we know if there are back doors in cell towers bought from Huawei or other Chinese companies? The Free and Open Source community answers that the software should be licensed under a Free Software license and then all is good. But can we actually check that the software running on the infrastructure is the same as the source code we have? For that we would need reproducible builds, access to all the code-signing and encryption keys, and infrastructure that fetches software updates from our update server and not the one provided by the manufacturer. So the software license is important, but it doesn’t give you full control and freedom.

Android

Android is a very popular mobile OS in the Free Software community, because it’s released under a Free Software license. I know a lot of Free Software activists who run a custom build of Android on their phone and only install Free Software from app stores like F-Droid. Unfortunately, 99% of normal users out there don’t get these freedoms, because their phones can’t be unlocked, or they lack the technical knowledge to unlock them, or they rely on software that is only available in the Google Play Store. Users are trapped in the classic vendor lock-in. So the fact that the Android core is Free Software doesn’t actually give much freedom to 99% of its users.

So what is the conclusion?

I think the part of the Open Source and Free Software community that cares about Stallman’s 4 freedoms, about being in control of its digital life, and about user freedom has to expand its scope. Free Software licenses are needed, but they are by far no longer enough to fight for user freedom and to guarantee that users are in control of their digital lives. The formula (Personal Hardware) + (Free Software) = (Digital Freedom) is no longer valid; more ingredients are needed. I hope the Free Software community can and will reform itself to focus on more topics than licenses alone. The world needs people who fight for digital rights and user freedoms now more than ever.

[GSoC – 6] Achieving consistency between SDDM and Plasma

Sunday 25th of August 2019 07:16:21 PM

Previously: 1st GSoC post 2nd GSoC post 3rd GSoC post 4th GSoC post 5th GSoC post. Roughly a year ago I made a post titled How I'd improve KDE Plasma - a user's point of view. I never shared the post publicly, but revisiting the first topic of the post — "my biggest pet peeve"…

Pay another respect to kritacommand--which we are going beyond

Sunday 25th of August 2019 06:34:27 PM

“Your work is gonna make Krita significantly different.” – Wolthera, Krita developer and digital artist

Krita’s undo system, kritacommand, was added to Calligra 8 years ago under the name kundo2, as a fork of Qt’s undo framework. The use of undo commands, however, may have an even longer history. Undo commands provide a way to revert individual actions. Up to now, most (though not all) undo commands do this by providing two sets of code that do and undo the action, respectively. The drawbacks of this system include: (1) it is not very easy to manage; (2) it may introduce duplicated code; and (3) it makes it hard to access a previous document state without actually going back to that state. My work is the start of getting rid of this situation.

The plan for the new system is to use shallow copies to store the document at different states. Dmitry said “it was something we really want to do and allows us to make historical brushes (fetch content from earlier document states).” According to him, he spent years implementing copy-on-write on paint layers. He suggested I start from vector layers, which he thought would be easier since they do not need to be as thread-safe.

I completely understood that this was a challenge, but did not realize where the difficult part was until I got here. Copy-on-write is not the challenging part: we have QSharedDataPointer and almost all of that work is to routinely replace the same code. Porting the tools is more difficult. The old flake tools run in the GUI thread, which imposes no thread-safety requirements. Technically we do not need to run them in a stroke / in the image thread, but without multithreading the tools run too slowly on some computers (read as “my ThinkPad laptop”), so I was willing to take on this extra challenge. In previous posts I described how the strokes work and the problems I encountered. Besides those, there are still some problems I need to face.

The HACK code in the stroke strategy

At the end of the strokes post, I proposed a messy fix for the crash when deleting a KisNode. After testing with Dmitry at the sprint, we discovered that the real problem lies in KoShapeManager’s updateTreeCompressor. It is used to schedule updates of the manager’s R-tree; however, since that update is now run at the beginning of every other operation, Dmitry says the compressor is no longer needed. With the compressor removed, we can safely delete the node normally, so there is no need for the hack code.

Path tool crashing when editing calligraphic shapes

A calligraphic shape, which comes from Karbon, is a shape created by hand-drawing. It has many path points, and editing it with the path tool usually leads to a crash. Dmitry tested it with ASan and discovered that the problem occurs because the path points, which are fetched in the GUI thread to paint the canvas, can be deleted while the shape is being edited. He suggests applying a lock to the canvas, so that the image and GUI threads cannot access the shapes concurrently.

Keeping selections after undo/redoing

This challenge is a smaller one. The shape selections were not kept, since they are not part of the layer: they were owned by the layer’s shape manager, and a cloned layer takes a brand-new shape manager. In addition, undo() and redo() now replace the whole layer, so pointers to the original shapes are no longer valid. This means merely keeping the selections from the shape manager would not work. The solution is to map the selected shapes to the cloned layer, which is kept in the undo command. The strategy I use is similar to what we did for layers: go through the whole hierarchy of the old layer and push every shape into a queue; then go through the hierarchy of the cloned layer in the same order, each time popping the first shape from the queue; if the popped shape is in the selection, we add its counterpart in the cloned layer to the new selection.

For now the tools should be working, and the merge request is ready for final review. Hopefully it will make its way to master soon.
